* UploadBakedTextureModule

* Adds IScene.CloseAgent() to replace RemoveClient(), making it clear that all non-clientstack callers should be using this rather than RemoveClient() in order to step through the ScenePresence state machine properly.

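A minimal sketch of the pattern this describes, assuming simplified types (only IScene.CloseAgent(), RemoveClient() and the idea of a ScenePresence state machine come from the commit; everything else is illustrative): close requests funnel through one method that steps the state machine, so two callers cannot tear a presence down concurrently.

    using System;

    public enum PresenceState { Running, Closing, Removed }

    public class ScenePresence
    {
        private PresenceState _state = PresenceState.Running;
        private readonly object _sync = new object();

        // Guarded transition: refuses out-of-order close attempts instead
        // of letting two callers race through teardown.
        public bool TryTransition(PresenceState from, PresenceState to)
        {
            lock (_sync)
            {
                if (_state != from) return false;
                _state = to;
                return true;
            }
        }
    }

    public class Scene
    {
        // Non-clientstack callers use this; the state machine decides
        // whether the low-level RemoveClient() work may proceed.
        public bool CloseAgent(ScenePresence sp)
        {
            if (!sp.TryTransition(PresenceState.Running, PresenceState.Closing))
                return false;              // already closing or gone
            RemoveClient(sp);              // the old direct path, now internal
            sp.TryTransition(PresenceState.Closing, PresenceState.Removed);
            return true;
        }

        private void RemoveClient(ScenePresence sp) { /* detach client structures */ }
    }
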
* Patch taken from http://opensimulator.org/mantis/view.php?id=4905, originally by Greg C.
  Fixed to apply to r/23314 commit ba9daf849e7c8db48e7c03e7cdedb77776b2052f.
  (cherry picked from commit 4ff9fbca441110cc2b93edc7286e0e9339e61cbe)

* to allow for external handlers.

* hear chat from their source region for some time after teleport completion.
  This occurs on v2 teleport, since the source region now waits 15 secs before closing the old child agent, which could still receive chat.
  This commit introduces a ScenePresenceState.PreClose which is set before the wait, so that ChatModule can check for ScenePresenceState.Running.
  This was theoretically also an issue on v1 teleport, but since the pause before close was only 2 secs there, it was not noticed.

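A sketch of the PreClose gating under a simplified model (the state names are from the commit; the types around them are assumed): the source region flips the presence to PreClose before its 15-second wait, and chat is delivered only to presences still in Running.

    using System.Collections.Generic;

    public enum ScenePresenceState { Running, PreClose, Removing }

    public class ScenePresence
    {
        public ScenePresenceState LifecycleState = ScenePresenceState.Running;
        public void SendChat(string message) { /* deliver to the viewer */ }
    }

    public static class ChatModule
    {
        public static void DeliverChat(IEnumerable<ScenePresence> presences, string message)
        {
            foreach (ScenePresence sp in presences)
                if (sp.LifecycleState == ScenePresenceState.Running)  // PreClose agents stay silent
                    sp.SendChat(message);
        }
    }
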
* occurring on every login and entity transfer

* client without a queue to include the event message name.

* a line from A->B->C would not close region A when reaching C.
  The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
  However, fixing this requires that the neighbour regions also detect when they should not close because a re-teleport has re-established the child connection.
  This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the agent.
  This commit appears to fix these issues and improve teleport, but it still has holes on at least quick re-teleporting (and possibly occasionally on ordinary teleports).
  Also, this has not been completely tested yet in scenarios where regions are running on different simulators.

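One way to picture the re-teleport guard described above, as a cancellable delayed close (all names here are illustrative, not OpenSim's actual code):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public class ChildAgentConnection
    {
        private CancellationTokenSource _pendingClose;

        // Called when this region stops being needed; the close only fires
        // if nothing revives the connection during the delay.
        public void ScheduleClose(TimeSpan delay)
        {
            _pendingClose = new CancellationTokenSource();
            CancellationToken token = _pendingClose.Token;
            Task.Delay(delay, token).ContinueWith(t =>
            {
                if (!t.IsCanceled)
                    CloseNow();
            });
        }

        // Called when a re-teleport re-establishes this child agent:
        // the queued close must not run anymore.
        public void Reestablish()
        {
            _pendingClose?.Cancel();
        }

        private void CloseNow() { /* tear down the child agent */ }
    }
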
* cap is something other than "localhost". A new interface for handling external caps is supported, with an example implemented for Simian. The only Linden cap supporting this interface right now is the GetTexture cap.

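A sketch of what such an external-caps hook might look like; the interface name and signature are assumptions (only the Simian example and the GetTexture cap come from the commit):

    using System;
    using System.Collections.Generic;

    public interface IExternalCapsHandler
    {
        // Returns an externally hosted URL for the named cap, or null to
        // fall back to the simulator's own handler.
        string GetExternalUrl(string capName, Guid agentId);
    }

    public class CapsResolver
    {
        private readonly IExternalCapsHandler _external;  // e.g. a Simian-backed implementation
        private static readonly HashSet<string> Externalizable =
            new HashSet<string> { "GetTexture" };         // the only Linden cap wired up so far

        public CapsResolver(IExternalCapsHandler external) { _external = external; }

        public string ResolveCapUrl(string capName, Guid agentId, string localUrl)
        {
            if (_external != null && Externalizable.Contains(capName))
                return _external.GetExternalUrl(capName, agentId) ?? localUrl;
            return localUrl;
        }
    }
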
* HGMapModule, hooking on to the SimulatorFeatures.OnSimulatorFeaturesRequest event (similar to what the DynamicMenuModule does).
  Only HG visitors get this var, to avoid spamming local users.
  The config var is now called MapTileURL, to be consistent with the login one, and it's picked up from either [LoginService], [HGWorldMap] or [SimulatorFeatures], just because I have a bad memory.

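A sketch of the hook-up, assuming a simplified features module (the event name, the MapTileURL key and the HG-visitor check come from the commit; the rest is illustrative):

    using System;
    using System.Collections.Generic;

    public class SimulatorFeaturesModule
    {
        // Fired whenever a viewer requests SimulatorFeatures; listeners may
        // add entries to the features map before it is sent.
        public event Action<Guid, IDictionary<string, object>> OnSimulatorFeaturesRequest;

        public void FireFeaturesRequest(Guid agentId, IDictionary<string, object> features)
        {
            OnSimulatorFeaturesRequest?.Invoke(agentId, features);
        }
    }

    public class HGMapModule
    {
        private readonly string _mapTileUrl;  // read from [LoginService], [HGWorldMap] or [SimulatorFeatures]

        public HGMapModule(string mapTileUrl, SimulatorFeaturesModule features)
        {
            _mapTileUrl = mapTileUrl;
            features.OnSimulatorFeaturesRequest += AddMapTileUrl;
        }

        private void AddMapTileUrl(Guid agentId, IDictionary<string, object> features)
        {
            if (IsHGVisitor(agentId))                 // local users don't get spammed
                features["MapTileURL"] = _mapTileUrl;
        }

        private bool IsHGVisitor(Guid agentId) { return false; /* placeholder lookup */ }
    }
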
* OSDMap GridServices to OpenSimExtras, normalized the URL keys under it, and moved ExportEnabled under it too. Melanie: change your viewer code accordingly.
  Documentation at http://opensimulator.org/wiki/SimulatorFeatures_Extras

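The resulting extras map looks roughly like this; OpenSimExtras, GridServices and ExportEnabled are named in the commit, while the individual service keys below are placeholders (see the wiki page for the real list). Plain dictionaries stand in for OSDMap:

    using System.Collections.Generic;

    public static class ExtrasExample
    {
        public static Dictionary<string, object> BuildExtras()
        {
            // Grid service URLs now live in one normalized sub-map...
            var gridServices = new Dictionary<string, object>
            {
                { "map-server-url", "http://grid.example.org/map" },        // placeholder key
                { "search-server-url", "http://grid.example.org/search" },  // placeholder key
            };

            // ...and everything, ExportEnabled included, hangs off OpenSimExtras.
            return new Dictionary<string, object>
            {
                { "GridServices", gridServices },
                { "ExportEnabled", true },
            };
        }
    }
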
* join/drop appropriately, invite chatboxes.
  The major departure from flotsam is to send only one message per destination region, as opposed to one message per group member. This reduces messaging considerably in large groups that have clusters of members in certain regions.

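The batching idea in miniature (types and transport are illustrative): members are bucketed by their current region, and each region receives a single message listing its local recipients.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class GroupMember
    {
        public Guid AgentId;
        public string RegionUrl;   // where the member's agent currently is
    }

    public static class GroupMessenger
    {
        public static void SendToGroup(IEnumerable<GroupMember> members, string message)
        {
            // One message per destination region, not one per member.
            foreach (var byRegion in members.GroupBy(m => m.RegionUrl))
            {
                List<Guid> recipients = byRegion.Select(m => m.AgentId).ToList();
                SendToRegion(byRegion.Key, recipients, message);
            }
        }

        static void SendToRegion(string regionUrl, List<Guid> agents, string message)
        {
            /* single inter-region call; the region fans out locally */
        }
    }
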
* because DisableSimulator is now being sent via UDP

* as needed. (Although sometimes Justin's state machine kicks in and doesn't let you.) The EventQueues are a hairy mess, and it's very easy to mess things up; but it looks like this commit makes them work right. Here's what's going on:
  - Child and root agents are only closed after 15 secs, maybe.
  - If the user comes back, they aren't closed, and everything is reused.
  - On the receiving side, clients and scene presences are reused if they already exist.
  - Caps are always recreated (this is where I spent most of my time!). It turns out that, because the agents carry the seeds around, the seed gets the same URL, except for the root agent coming back to a faraway region, which gets a new seed (because we don't know what its seed was in the departing region, and we can't send it back to the client when the agent returns there).

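The seed behavior in the last point can be pictured like this (a loose sketch with assumed names, not the actual caps code): reuse whatever seed the agent carries, and mint a fresh one only for an arriving root agent whose old seed was never known here.

    using System;
    using System.Collections.Generic;

    public class CapsSeedTable
    {
        private readonly Dictionary<Guid, string> _seeds = new Dictionary<Guid, string>();

        public string GetOrCreateSeed(Guid agentId, string carriedSeed)
        {
            if (carriedSeed != null)
            {
                _seeds[agentId] = carriedSeed;   // agent carried its seed: same URL as before
                return carriedSeed;
            }

            string seed;
            if (!_seeds.TryGetValue(agentId, out seed))
            {
                // Root agent returning from far away: the departing region's
                // seed was never known here, so a new one must be minted.
                seed = "/CAPS/" + Guid.NewGuid() + "/";
                _seeds[agentId] = seed;
            }
            return seed;
        }
    }
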
* requests time out in 60 secs.
  There's plenty of room for improvement in handling the EQs. Some other time...

* successfully tested, and I'm merging back those changes, which proved to be good.
  Revert "Revert "Cleared up much confusion in PollServiceRequestManager. Here's the history:""
  This reverts commit fa2370b32ee57a07f27501152c3c705a883b13d8.

history:"
This reverts commit e46459ef21e1ee5ceaeca70365a7c881d33b09ce.
* When Melanie added the web fetch inventory throttle to core, she made the long poll requests (EQs) effectively be handled on an active loop. All those requests, if they existed, were being constantly dequeued, checked for events (which most often they didn't have), and requeued again. This was an active loop thread on a 100ms cycle!
  This fixes the issue. Now the inventory requests, if they aren't ready to be served, are placed directly back in the queue, but the long poll requests aren't placed there until there are events ready to be sent or the timeout has been reached.
  This puts the LongPollServiceWatcherThread back on a 1 sec cycle, as it was before.

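The before/after in miniature (illustrative types): rather than cycling every request through the worker queue, long polls are parked and only requeued by a once-per-second watcher when they have events or have timed out.

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public class PollRequest
    {
        public bool IsLongPoll;        // EQ-style request
        public DateTime Deadline;
        public Func<bool> HasEvents;
    }

    public class PollServiceRequestManager
    {
        private readonly BlockingCollection<PollRequest> _ready = new BlockingCollection<PollRequest>();
        private readonly List<PollRequest> _parked = new List<PollRequest>();

        public void Enqueue(PollRequest req)
        {
            if (req.IsLongPoll)
                lock (_parked) _parked.Add(req);   // parked: no worker ever spins on it
            else
                _ready.Add(req);                   // workers Take() from here
        }

        // Body of the watcher thread, run once per second.
        public void WatcherTick()
        {
            lock (_parked)
            {
                for (int i = _parked.Count - 1; i >= 0; i--)
                {
                    PollRequest req = _parked[i];
                    if (req.HasEvents() || DateTime.UtcNow >= req.Deadline)
                    {
                        _parked.RemoveAt(i);
                        _ready.Add(req);           // now worth a worker's time
                    }
                }
            }
        }
    }
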
* random crash might be in DoubleQueue instead. See http://pastebin.com/XhNBNqsc

* a random crash in a load test yesterday

* Curiously, the number of requests received is always one greater than that shown as handled - needs investigation.

* handlers.
  This adds explicit cap poll handler support to the Caps classes, rather than relying on callers to do the complicated coding.
  Other refactoring was required to get logic into the right places to support this.

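A sketch of what explicit poll handler support can look like; the registration method and args type are modeled on the description, not copied from the code:

    using System;
    using System.Collections.Generic;

    public class PollServiceEventArgs
    {
        public Func<Guid, bool> HasEvents;     // is there anything to send this agent?
        public Func<Guid, string> GetEvents;   // build the response body
    }

    public class Caps
    {
        private readonly Dictionary<string, PollServiceEventArgs> _pollHandlers =
            new Dictionary<string, PollServiceEventArgs>();

        // Callers register the cap; Caps does the complicated poll plumbing
        // once, instead of every module hand-rolling it.
        public void RegisterPollHandler(string capName, PollServiceEventArgs args)
        {
            _pollHandlers[capName] = args;
            // ...hook capName's URL into the HTTP server's poll service here...
        }
    }
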
* others, in preparation for experiments to direct baked texture uploads to a robust instance. No functional or configuration changes -- should work exactly as before.

* not set

* to print various counts of capability invocation by user and by cap.
  This currently prints caps requests received and handled, so that an overload of received compared to handled, or a deadlock, can be detected.
  This involves making BaseStreamHandler and BaseOutputStream record the counts, which means inheritors should override ProcessRequest() instead of Handle().
  However, existing inheriting classes overriding Handle() will still work, albeit without stats recording.
  "show caps" becomes "show caps list" to disambiguate between show caps commands.

* enabled.
  This also moves the abort to RemoveRegion() rather than a destructor.

* they're not in the requested caps list.
  The previous wrong behavior caused the debug setting "UseHTTPInventory" to fail on all viewers when turned off. UDP inventory would not be correctly used in that case.

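The corrected selection amounts to an intersection (simplified sketch): only caps named in the viewer's request list are returned, so switching UseHTTPInventory off cleanly removes the HTTP inventory cap and the viewer falls back to UDP inventory.

    using System.Collections.Generic;

    public static class CapsSelection
    {
        public static Dictionary<string, string> FilterCaps(
            Dictionary<string, string> available,      // caps this region serves
            IEnumerable<string> requestedNames)        // caps the viewer asked for
        {
            var result = new Dictionary<string, string>();
            foreach (string name in requestedNames)
            {
                string url;
                if (available.TryGetValue(name, out url))  // never volunteer unrequested caps
                    result[name] = url;
            }
            return result;
        }
    }
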
* For debugging purposes.

* This tells the viewer to enable the UI for export permissions.
  WARNING: If your inventory store contains invalid flags data, this will result in items becoming exportable! Don't turn this on in production until it's complete!

* with our own, and add export permissions as well as a new definition for "All" as meaning "all conventional permissions" rather than "all possible permissions".

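A sketch of the resulting mask; the Transfer/Modify/Copy/Move bit positions follow the standard viewer permission constants, but treat the Export bit value and the exact composition of "All" as illustrative:

    using System;

    [Flags]
    public enum PermissionMask : uint
    {
        None     = 0,
        Transfer = 1 << 13,   // 0x02000
        Modify   = 1 << 14,   // 0x04000
        Copy     = 1 << 15,   // 0x08000
        Export   = 1 << 16,   // 0x10000  (the new export permission)
        Move     = 1 << 19,   // 0x80000

        // "All" now means all *conventional* permissions, not every possible
        // bit, so Export must always be granted deliberately.
        All      = Transfer | Modify | Copy | Move,   // 0x8E000
    }
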
* request the capability was always used instead of the original requesting agent for each cap.
  Should address http://opensimulator.org/mantis/view.php?id=6556

* OpenSim.Region.ClientStack.Linden.Caps.dll
