| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
successfully tested, and I'm merging back those changes, which proved to
be good.
Revert "Revert "Cleared up much confusion in PollServiceRequestManager. Here's the history:""
This reverts commit fa2370b32ee57a07f27501152c3c705a883b13d8.
to avoid kicking the wrong user or multiple wrong users.
history:"
This reverts commit e46459ef21e1ee5ceaeca70365a7c881d33b09ce.
before the first simulator step.
When Melanie added the web fetch inventory throttle to core, she made the long poll requests (EQs) effectively be handled in an active loop. All those requests, if they existed, were being constantly dequeued, checked for events (which most often they didn't have), and requeued again. This was an active loop thread on a 100ms cycle!
This fixes the issue. Now the inventory requests, if they aren't ready to be served, are placed directly back in the queue, but the long poll requests aren't placed there until there are events ready to be sent or a timeout has been reached.
This puts the LongPollServiceWatcherThread back to a 1-second cycle, as it was before.
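The requeue policy described above can be sketched as follows. This is a minimal Python illustration, not OpenSim's actual C# code; `PollRequest`, `drain`, and the queue names are hypothetical stand-ins:

```python
import queue
import time

class PollRequest:
    """Hypothetical stand-in for a poll service request."""
    def __init__(self, long_poll=False, timeout=30.0):
        self.long_poll = long_poll                  # EQ-style long poll?
        self.deadline = time.monotonic() + timeout
        self.events = []                            # events pending for this request

    def has_events(self):
        return bool(self.events)

    def expired(self):
        return time.monotonic() >= self.deadline

def drain(active_queue, parked_long_polls, served):
    """One scheduler pass over the currently queued requests.

    Ordinary requests that aren't ready go straight back on the queue;
    long polls are parked aside until events arrive or they time out,
    so the fast loop never spins on them."""
    for _ in range(active_queue.qsize()):
        req = active_queue.get_nowait()
        if req.has_events() or req.expired():
            served.append(req)              # respond now
        elif req.long_poll:
            parked_long_polls.append(req)   # a slow watcher revisits these
        else:
            active_queue.put(req)           # retry on the next pass
```

Under this scheme, the watcher that revisits the parked long polls can run on a relaxed one-second cycle, since nothing in the fast path depends on it.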
services throttle thread. Didn't change anything in how that processor is implemented, for better or for worse.
the interface, so that duplicate requests aren't enqueued more than once.
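One common way to guarantee that duplicate requests are never enqueued twice is to track pending keys alongside the queue. A small sketch with hypothetical names (the actual change described above is in OpenSim's C# interface):

```python
from collections import deque

class DedupQueue:
    """FIFO queue that silently drops enqueues for keys already pending."""
    def __init__(self):
        self._items = deque()
        self._pending = set()   # keys currently waiting in the queue

    def enqueue(self, key, item):
        if key in self._pending:
            return False        # duplicate request: ignore it
        self._pending.add(key)
        self._items.append((key, item))
        return True

    def dequeue(self):
        key, item = self._items.popleft()
        self._pending.discard(key)   # the key may be enqueued again now
        return item
```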
up the cache, because the resource may be here in the meantime
attachments module implementations. All calls to Scene.AttachmentsModule are checking for null. Ideally, if we support disabling attachments then we need a null attachments module to register with the scene.
reflect its more generic nature.
other one generic, taking any continuation.
altogether. Instead, this uses a timer. Not sure if it's better or worse, but worth a try.
random crash might be in DoubleQueue instead. See http://pastebin.com/XhNBNqsc
a random crash in a load test yesterday
Curiously, the number of requests received is always one greater than that shown as handled - needs investigation
This reverts commit b060ce96d93a33298b59392210af4d336e0d171b.
Revert "Trying to hunt the CPU spikes recently experienced."
This reverts commit ac73e702935dd4607c13aaec3095940fba7932ca.
Revert "Comment out old inbound UDP throttling hack. This would cause the UDP"
This reverts commit 38e6da5522a53c7f65eac64ae7b0af929afb1ae6.
chaotic while people are using different versions of OpenSim. Warning only, but no enforcement.
initialized in the tests!
Revert "Trying to reduce CPU usage on logins and TPs: trying radical elimination of all FireAndForgets throughout CompleteMovement. There were 4."
This reverts commit 682537738008746f0aca22954902f3a4dfbdc95f.
all FireAndForgets throughout CompleteMovement. There were 4.
we found the root of the rez delay: the priority scheme BestAvatarResponsiveness, which is currently the default, was the culprit. Changing it to FrontBack made the region rez a lot more natural.
BestAvatarResponsiveness introduces the region rez delay in cases where the region is full of avatars with lots of attachments, which is the case in CC load tests. In that case, the inworld prims are sent only after all avatar attachments are sent. Not recommended for regions with heavy avatar traffic!
TriggerOnMakeRootAgent to the end of CompleteMovement.
Justin, if you read this, there's a long story here. Some time ago you placed SendInitialDataToMe at the very beginning of client creation (in LLUDPServer). That is problematic, as we discovered relatively recently: on TPs, as soon as the client starts getting data from child agents, it starts requesting resources back *from the simulator where its root agent is*. We found this to be the problem behind meshes missing on HG TPs (because the viewer was requesting the meshes of the receiving sim from the departing grid). But this affects much more than meshes and HG TPs. It may also explain cloud avatars after a local TP: baked textures are only stored in the simulator, so if a child agent receives a UUID of a baked texture in the destination sim and requests that texture from the departing sim where the root agent is, it will fail to get that texture.
Bottom line: we need to delay sending the new simulator data to the viewer until we are absolutely sure that the viewer knows that its main agent is in a new sim. Hence, moving it to CompleteMovement.
Now I am trying to tune the initial rez delay that we all experience in the CC. I think that when I fixed the issue described above, I may have moved SendInitialDataToMe to much later than it should be, so now I'm moving it to earlier in CompleteMovement.
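The ordering constraint argued above can be summarized in a sketch. All names here are illustrative stand-ins, not the real LLUDPServer/ScenePresence code; the point is only the sequence of steps:

```python
calls = []  # records the invocation order, for illustration

def confirm_agent_movement(client):
    # The viewer now knows its root agent lives in this sim.
    calls.append("movement_complete")

def send_initial_data_to_me(client):
    # Safe only after the move is confirmed. Done earlier (e.g. at client
    # creation), the viewer would request this sim's assets from the
    # departing sim: missing meshes on HG TPs, cloud avatars locally.
    calls.append("initial_data")

def trigger_on_make_root_agent(client):
    # Moved to the end of CompleteMovement per the commit above.
    calls.append("on_make_root_agent")

def complete_movement(client):
    confirm_agent_movement(client)
    send_initial_data_to_me(client)
    trigger_on_make_root_agent(client)
```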
handlers.
This adds explicit cap poll handler support to the Caps classes rather than relying on callers to do the complicated coding.
Other refactoring was required to get logic into the right places to support this.