activity timeout, a root connection triggers a CloseAgent to a neighbour region which has already closed the agent due to inactivity.
Also separates out the log messages to distinguish a close that finds no agent from one presenting the wrong auth token, downgrading the former to debug and the latter to warn.
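A minimal sketch of that two-case distinction, with purely illustrative names rather than OpenSim's actual API:

    using System;

    static class CloseAgentChecks
    {
        // Returns true only when a matching agent exists and the token matches.
        public static bool CheckClose(object presence, Guid tokenSent, Guid tokenExpected)
        {
            if (presence == null)
            {
                // Routine when the neighbour already closed the agent for
                // inactivity, so log at debug only.
                Console.WriteLine("DEBUG: CloseAgent: agent not found");
                return false;
            }

            if (tokenSent != tokenExpected)
            {
                // Unexpected and possibly hostile, so log at warn.
                Console.WriteLine("WARN: CloseAgent: incorrect auth token");
                return false;
            }

            return true;
        }
    }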
going to that region first.
If this is attempted, the user instead gets a "Try moving closer. Can't sit on object because it is not in the same region as you." message, which is the same behaviour as the current LL grid.
Sitting on the ground is okay, since the viewer navigates the avatar to the required region before sitting.
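The guard amounts to a same-region check before honouring the sit request; a hypothetical sketch (names invented here):

    using System;

    static class SitChecks
    {
        // Refuse a sit target that lives in a different region from the avatar.
        public static bool CanSit(ulong avatarRegionHandle, ulong targetRegionHandle,
                                  Action<string> sendAlert)
        {
            if (targetRegionHandle != avatarRegionHandle)
            {
                sendAlert("Try moving closer. Can't sit on object because it is not in the same region as you.");
                return false;
            }
            return true;
        }
    }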
should better allow you to run it in multiple-region scenarios (but why would you really want to do that?). Source is in OpenSimLibs.
* Fixed a null ref during shutdown.
This will allow viewers to fix broken wearables as they detect them.
regression test for v2 transfers.
Also adjusts names of teleport setup helpers in EntityTransferHelpers
transfer protocol
should be kept open after teleport-end rather than doing this in the ET Module
This is safer since the close check in IncomingCloseAgent() is done under lock conditions, which prevents a race between ETM and Scene.AddClient()
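As a simplified model of that lock discipline (hypothetical names, not the real Scene code): the flag check and the close decision happen under the same lock a re-connecting client would take, so the ETM and AddClient cannot interleave.

    class AgentCloser
    {
        private readonly object m_sync = new object();
        private bool m_doNotCloseAfterTeleport;

        // Called by teleport-end to keep the connection open.
        public void KeepOpenAfterTeleport()
        {
            lock (m_sync)
                m_doNotCloseAfterTeleport = true;
        }

        // The close check runs under the lock, so it cannot race with a
        // client that is being re-added at the same time.
        public bool IncomingCloseAgent(System.Action close)
        {
            lock (m_sync)
            {
                if (m_doNotCloseAfterTeleport)
                {
                    m_doNotCloseAfterTeleport = false;
                    return false;
                }
                close();
                return true;
            }
        }
    }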
This approach has problems if a client quits without sending a proper logout but then reconnects before the connection is closed due to inactivity.
In this case, the DoNotCloseAfterTeleport flag was wrongly set.
The simplest approach is to close child agents on teleport as quickly as possible, so that races are very unlikely to occur.
Hence, this code now closes child agents as the first action after a successful teleport.
Scene.IncomingCloseAgent() instead.
IncomingCloseAgent() now sets the scene presence state machine properly, which is necessary to avoid races between multiple sources of close.
Hence, it's also necessary for everyone to consistently call IncomingCloseAgent().
Calling RemoveClient() directly currently generates an attention-grabbing exception, though right now this is harmless.
This is giving much better results on teleports between simulators over my LAN, where for some reason there is a pause before the receiving simulator processes UpdateAgent().
At this point, v2 teleports between neighbour and non-neighbour regions on a single simulator, between v2 simulators, and between a v1 and a v2 simulator are working okay for me in different scenarios (e.g. simple teleport; teleport back to the original region quickly and re-teleport; teleport back to a neighbour and re-teleport; etc.).
(A,B,C) where the A root agent is still closed, terminating the connection.
This was occurring because the teleport to B did not set DoNotCloseAfterTeleport on A, since A was a neighbour. The flag is normally left unset for neighbours because the source region does not send Close() to regions that remain neighbours, which would leave the flag set with nothing to reset it.
The fix here is to still set DoNotCloseAfterTeleport if the scene presence is still registered as in transit from A.
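In other words, the condition gains an in-transit escape hatch. A hypothetical sketch, reusing the AgentCloser sketch above:

    // The old child agent on A is protected when the destination is not a
    // neighbour, or when the presence is still mid-teleport from A.
    static void ProtectOldAgent(bool destinationIsNeighbour, bool stillInTransit,
                                AgentCloser oldAgent)
    {
        if (!destinationIsNeighbour || stillInTransit)
            oldAgent.KeepOpenAfterTeleport();
    }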
a line of teleports A->B->C would not close region A when reaching C.
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires that the neighbour regions also detect when they should not close, due to re-teleports re-establishing the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the agent.
This commit appears to fix these issues and improve teleport, but still has holes in at least quick re-teleporting (and possibly occasionally in ordinary teleports).
Also, this has not been completely tested yet in scenarios where regions are running on different simulators.
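One way to picture such a state machine (states and transitions invented for illustration, not the exact OpenSim ones):

    enum PresenceState { Running, Removing, Removed }

    class PresenceLifecycle
    {
        private readonly object m_sync = new object();
        private PresenceState m_state = PresenceState.Running;

        // Serializes competing close requests (timeout, teleport, logout):
        // only the first caller wins; the rest see false and back off.
        public bool TryBeginRemove()
        {
            lock (m_sync)
            {
                if (m_state != PresenceState.Running)
                    return false;
                m_state = PresenceState.Removing;
                return true;
            }
        }

        // A late re-connect is refused while a removal is still in flight.
        public bool TryAddClient()
        {
            lock (m_sync)
            {
                if (m_state == PresenceState.Removing)
                    return false;
                m_state = PresenceState.Running;
                return true;
            }
        }

        public void CompleteRemove()
        {
            lock (m_sync)
                m_state = PresenceState.Removed;
        }
    }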
for now. Once the code churn on teleport ends, I can find a better solution.
add minor details to some log messages and rename a misleading local variable.
No functional changes.
estate bans also, and delete the obsolete config var.
position. That was complete overkill and is unnecessary at this point.
foreigners if OutboundPermission is false
permissions by not allowing objects to be taken to inventory.
cap is something other than "localhost". A new interface for handling external caps is supported, with an example implemented for Simian. The only Linden cap supporting this interface right now is the GetTexture cap.
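The hook presumably has roughly the following shape; this is a guess at the contract, not the actual interface from the commit:

    using System;

    // Hypothetical shape of an external-caps hook: a module can claim a named
    // capability and return a URL hosted somewhere other than the simulator.
    public interface IExternalCapsHandler
    {
        // Returns null when this handler does not serve the named cap
        // (e.g. "GetTexture"); otherwise the externally hosted URL.
        string GetCapUrl(Guid agentID, string capName);
    }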
handler from both Groups modules in core, and replaced them with an operation on IGroupsModule.
join/drop appropriately, invitechatboxes.
The major departure from Flotsam is to send only one message per destination region, as opposed to one message per group member. This reduces messaging considerably in large groups that have clusters of members in certain regions.
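A sketch of that fan-out (types and the delivery callback are invented here): group the member list by region, then send once per region.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class GroupMessageFanout
    {
        // One send per destination region instead of one per member; the
        // region then delivers the message to its local members.
        public static void Send(IEnumerable<(Guid Member, ulong Region)> members,
                                Action<ulong, Guid[]> sendToRegion)
        {
            foreach (var group in members.GroupBy(m => m.Region))
                sendToRegion(group.Key, group.Select(m => m.Member).ToArray());
        }
    }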
This is a test setup failure, since code paths when adding a duplicate root scene presence now require the EntityTransferModule to be present.
Fixed the test by adding this module to the test setup.
week's SIMULATOR/0.1 protocol for now.
These were genuine failures caused by ScenePresence.CompleteMovement() waiting for an UpdateAgent from NPC introduction that would never come.
Instead, we do not wait if the agent is an NPC.
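Conceptually the wait becomes conditional on agent type; a simplified sketch with hypothetical names:

    using System.Threading;

    static class CompleteMovementWait
    {
        // Only real viewers have an origin region that will call UpdateAgent;
        // NPCs are introduced locally, so there is nothing to wait for.
        public static bool WaitForUpdateAgent(bool isNpc, EventWaitHandle updateArrived,
                                              int timeoutMs)
        {
            if (isNpc)
                return true;

            return updateArrived.WaitOne(timeoutMs);
        }
    }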
as needed. (Although sometimes Justin's state machine kicks in and doesn't let you.) The EventQueues are a hairy mess, and it's very easy to mess things up. But it looks like this commit makes them work right. Here's what's going on:
- Child and root agents are only closed after 15 sec, maybe
- If the user comes back, they aren't closed, and everything is reused
- On the receiving side, clients and scene presences are reused if they already exist
- Caps are always recreated (this is where I spent most of my time!). It turns out that, because the agents carry the seeds around, the seed gets the same URL, except for the root agent coming back to a faraway region, which gets a new seed (because we don't know what its seed was in the departing region, and we can't send it back to the client when the agent returns there).
DoNotClose to DoNotCloseAfterTeleport
was a previous root agent, do not close that child agent at the end of the 15 sec teleport timer.
This prevents an issue if the user teleports back to a neighbour simulator of the source before 15 seconds have elapsed.
This more closely emulates observed Linden behaviour, though the timeout there is 50 secs and applies to all the pre-teleport agents.
Currently this sticks a DoNotClose flag on ScenePresence, though that may be temporary, as it could possibly be incorporated into the ETM state machine.
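The timer logic reads roughly like this sketch; the flag and the 15 s delay come from the text above, everything else is invented:

    using System;
    using System.Threading;

    class DelayedAgentClose
    {
        public volatile bool DoNotCloseAfterTeleport;

        // Close the old agent 15 s after teleport unless the user teleported
        // back in the meantime and set the flag.
        public void ScheduleClose(Action close)
        {
            Timer timer = null;
            timer = new Timer(_ =>
            {
                if (DoNotCloseAfterTeleport)
                    DoNotCloseAfterTeleport = false; // user returned: keep agent
                else
                    close();
                timer.Dispose();
            }, null, 15000, Timeout.Infinite);
        }
    }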
that eliminates the temporary placement at infinity upon TPs
besides ViaLogin.
the destination is being checked)
In this new protocol, and as committed before, the viewer is not sent EnableSimulator/EstablishChildCommunication for the destination. Instead, it is sent TeleportFinish directly. TeleportFinish, in turn, makes the viewer send a UseCircuitCode packet followed by a CompleteMovementIntoRegion packet. These two packets tend to occur one after the other almost immediately, to the point that when CMIR arrives the client is not even connected yet and that packet is ignored (there might have been some race conditions here before); the viewer then sends CMIR again within 5-8 secs. But the delay between them may be higher in busier regions, which may lead to race conditions.
This commit improves the process so there are no race conditions at the destination. CompleteMovement (triggered by the viewer) waits until Update has been sent from the origin. Update, in turn, waits until there is a *root* scene presence -- thus making sure CompleteMovement has run MakeRoot. In other words, there are two threadlets at the destination, one from the viewer and one from the origin region, waiting for each other to do the right thing. That makes it safe to close the agent at the origin upon return of the Update call without having to wait for the callback, because we are absolutely sure that the viewer knows it is in the new region.
Note also that in the V1 protocol, the destination was getting UseCircuitCode from the viewer twice -- once on EstablishAgentCommunication and then again on TeleportFinish. The second UCC was being ignored, but it shows how we were not following the expected steps...
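The two-threadlet rendezvous can be modelled with a pair of events; a minimal sketch under that reading (names invented):

    using System.Threading;

    class DestinationHandshake
    {
        private readonly ManualResetEvent m_updateReceived = new ManualResetEvent(false);
        private readonly ManualResetEvent m_rootPresenceReady = new ManualResetEvent(false);

        // Threadlet serving the origin region's UpdateAgent call.
        public bool OnUpdateAgent(int timeoutMs)
        {
            m_updateReceived.Set();
            // Only report success once CompleteMovement has made a root presence.
            return m_rootPresenceReady.WaitOne(timeoutMs);
        }

        // Threadlet serving the viewer's CompleteMovement request.
        public bool OnCompleteMovement(int timeoutMs)
        {
            // Wait for the origin's UpdateAgent before promoting to root.
            if (!m_updateReceived.WaitOne(timeoutMs))
                return false;
            m_rootPresenceReady.Set(); // MakeRoot would run just before this
            return true;
        }
    }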
15sec before closing the agent. This seems to be working fairly well. The viewer seems to have an 8 sec delay between UseCircuitCode and CompleteMovement.
Also added back the position on UpdateAgent, because it's needed for TPing between neighboring regions.
(Note: different from the defunct see_into_neighboring_sim, which used to control the process from the other end.) This enables child agents in neighbors that the root agent doesn't have permission to be in.
lkalif for telling me how to route the information. The viewer effect is under the distance filter, so only avatars with cameras < 10m away see the beams.
is generating the effect and the cameras of the observers. In particular, this applies to LookAt (which is really verbose, occurring every time the user moves the mouse) and Beam (which doesn't occur that often, but can be extremely noisy (~10/sec) when it happens).
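A sketch of such a filter; the 10 m radius is from the commit above, the coordinate parameters are invented:

    static class EffectDistanceFilter
    {
        private const float MaxDistance = 10f;

        // True when the observer's camera is within 10 m of the avatar
        // generating the effect, using squared distance to avoid a sqrt.
        public static bool ShouldSend(float ax, float ay, float az,
                                      float cx, float cy, float cz)
        {
            float dx = cx - ax, dy = cy - ay, dz = cz - az;
            return dx * dx + dy * dy + dz * dz <= MaxDistance * MaxDistance;
        }
    }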
- The existing event to scene has been split into two: OnAgentUpdate and OnAgentCameraUpdate, to better reflect the two types of updates that the viewer sends. We can run one without the other, which is what happens when the avie is still but the user is camming around.
- Added thresholds (as opposed to equality) to determine whether the update is significant or not. I think these thresholds are OK, but we can play with them later.
- Ignore updates of HeadRotation, which were problematic and aren't being used upstream.
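The significance test presumably looks something like the following; the threshold values here are placeholders, not the committed ones:

    using System;

    static class AgentUpdateSignificance
    {
        // Placeholder thresholds for illustration only.
        private const float RotThreshold = 0.01f;
        private const float CamThreshold = 0.05f;

        // Movement is significant when control flags change or the body
        // rotation moves beyond a threshold; HeadRotation is ignored.
        public static bool MovementChanged(uint flags, uint lastFlags, float bodyRotDelta)
        {
            return flags != lastFlags || Math.Abs(bodyRotDelta) > RotThreshold;
        }

        // Camera changes are judged separately, so camming alone does not
        // count as avatar movement.
        public static bool CameraChanged(float camPosDelta, float camAxisDelta)
        {
            return camPosDelta > CamThreshold || camAxisDelta > CamThreshold;
        }
    }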
the interface, so that duplicate requests aren't enqueued more than once.
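One common shape for that kind of de-duplication is a queue paired with a pending set; a generic sketch, not the actual interface from the commit:

    using System.Collections.Generic;

    class DedupQueue<T>
    {
        private readonly Queue<T> m_queue = new Queue<T>();
        private readonly HashSet<T> m_pending = new HashSet<T>();

        // Returns false when the item is already queued, so duplicate
        // requests are never enqueued more than once.
        public bool Enqueue(T item)
        {
            lock (m_queue)
            {
                if (!m_pending.Add(item))
                    return false;
                m_queue.Enqueue(item);
                return true;
            }
        }

        public bool TryDequeue(out T item)
        {
            lock (m_queue)
            {
                if (m_queue.Count == 0)
                {
                    item = default(T);
                    return false;
                }
                item = m_queue.Dequeue();
                m_pending.Remove(item);
                return true;
            }
        }
    }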