| Commit message | Author | Age | Files | Lines |
*
been deleted or has no spawn points.
*
2009 and was superseded by ISimulationService
*
CrossRegion, TeleportFinishEvent).
Have Simian grid service return the region size.
Many teleport-related debug log messages. Can be removed when teleport
works (like that's ever going to happen).
Conflicts:
OpenSim/Framework/RegionInfo.cs
*
CrossRegion, TeleportFinishEvent).
Have Simian grid service return the region size.
Many teleport-related debug log messages. Can be removed when teleport
works (like that's ever going to happen).
*
with a passed region size. This time in the map code and grid services code.
*
was wrong for large regions anyway.
*
Rename 'RegionWorldLocX' to 'WorldLocX' and same for Y and Z.
This preserves downward compatibility and follows the scheme of 'region'
and 'world' location naming that is happening in the Util module.
*
Intermediate checkin of changing border cross computation from checking
boundary limits to requests to GridService. Not totally functional.
*
Routines in Util to compute region world coordinates from region coordinates
as well as the conversion to and from region handles. These routines have
replaced a lot of math scattered throughout the simulator.
Should be no functional changes.
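
As a sketch of the math these routines centralize (method names follow the commit text, but the exact Util signatures are assumptions):

```csharp
// Sketch of the region/world coordinate conversions and region-handle
// packing described above. Names mirror the commit text; exact
// signatures in Util are assumptions.
public static class RegionCoordSketch
{
    public const int LegacyRegionSize = 256;   // meters per legacy region

    // Region count -> world meters (e.g. region X=1000 -> world X=256000).
    public static uint RegionToWorldLoc(uint regionCoord)
        => regionCoord * LegacyRegionSize;

    // World meters -> region count.
    public static uint WorldToRegionLoc(uint worldCoord)
        => worldCoord / LegacyRegionSize;

    // A region handle packs world X into the high 32 bits and world Y
    // into the low 32 bits.
    public static ulong RegionWorldLocToHandle(uint worldX, uint worldY)
        => ((ulong)worldX << 32) | worldY;

    public static void RegionHandleToWorldLoc(ulong handle, out uint worldX, out uint worldY)
    {
        worldX = (uint)(handle >> 32);
        worldY = (uint)(handle & 0xffffffffUL);
    }
}
```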
*
to check for border crossings based on the size of the region.
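
A minimal sketch of a size-aware crossing test, with illustrative stand-in types rather than the exact OpenSim signatures:

```csharp
// Hedged sketch: a border-crossing test that uses the region's actual
// extent instead of the historical hard-coded 256m edge.
public readonly struct RegionExtent
{
    public readonly float SizeX, SizeY;

    public RegionExtent(float sizeX, float sizeY) { SizeX = sizeX; SizeY = sizeY; }

    // True when a position has left this region, so a crossing (or
    // teleport) to a neighbour must be arranged.
    public bool IsOutside(float x, float y)
        => x < 0 || x >= SizeX || y < 0 || y >= SizeY;
}
```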
*
stack.
Implement both LoadTerrain and StoreTerrain for all DBs.
Move all database blob serialization/deserialization into TerrainData.
*
LLClientView to use same. This passes a terrain info class around rather than passing a one-dimensional array, thus allowing variable-sized regions. Update the database storage for variable region sizes. This should be downward compatible (same format for 256x256 regions).
*
-- addition of variable region size in X and Y
-- internal storage of heightmap changed from double[] to short[]
-- helper routines for handling internal structure while keeping existing API
-- to and from XML that adds region size information (for downward compatibility,
output in the legacy XML format if X and Y are 256)
Updated and commented Constants.RegionSize but didn't change the name for compatibility.
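
A minimal self-contained sketch of this layout (the *256 fixed-point scale and all names are assumptions), including the kind of blob round-trip the per-database LoadTerrain/StoreTerrain implementations can delegate to:

```csharp
using System.IO;

// Sketch: heights kept in a short[] (fixed-point), a double-based indexer
// preserving the existing API, and an opaque blob that any DB backend can
// store as-is. Illustrative only; not the actual TerrainData code.
public class TerrainDataSketch
{
    private const double Factor = 256.0;    // assumed fixed-point scale
    private readonly short[] _heights;      // row-major, SizeX * SizeY

    public int SizeX { get; private set; }
    public int SizeY { get; private set; }

    public TerrainDataSketch(int sizeX, int sizeY)
    {
        SizeX = sizeX;
        SizeY = sizeY;
        _heights = new short[sizeX * sizeY];
    }

    // The pre-existing double-based API, now backed by short[] storage.
    public double this[int x, int y]
    {
        get { return _heights[x + y * SizeX] / Factor; }
        set { _heights[x + y * SizeX] = (short)(value * Factor); }
    }

    // Serialize to a blob; a real implementation would also record a
    // format revision so legacy 256x256 rows remain readable.
    public byte[] ToDatabaseBlob()
    {
        using (MemoryStream ms = new MemoryStream())
        using (BinaryWriter bw = new BinaryWriter(ms))
        {
            bw.Write(SizeX);
            bw.Write(SizeY);
            foreach (short h in _heights)
                bw.Write(h);
            bw.Flush();
            return ms.ToArray();
        }
    }

    public static TerrainDataSketch FromDatabaseBlob(byte[] blob)
    {
        using (BinaryReader br = new BinaryReader(new MemoryStream(blob)))
        {
            TerrainDataSketch td = new TerrainDataSketch(br.ReadInt32(), br.ReadInt32());
            for (int i = 0; i < td._heights.Length; i++)
                td._heights[i] = br.ReadInt16();
            return td;
        }
    }
}
```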
*
count number to integer world coordinates.
Added new methods RegionWorldLoc[XY].
Refactored name of 'RegionLoc*' to 'LegacyRegionLoc*' throughout OpenSim.
Kept old 'RegionLoc*' entrypoint to RegionInfo for downward compatibility
of external region management packages.
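
Illustratively, the compatibility pattern looks like this (names follow the commit text; the exact property bodies are assumptions):

```csharp
// Sketch of the rename-with-shim pattern described above: the refactored
// names hold the data, while the old 'RegionLoc*' entry points remain as
// thin wrappers so external region-management packages still build.
public class RegionInfoSketch
{
    // Region-count coordinates (the pre-variable-size convention).
    public uint LegacyRegionLocX { get; set; }
    public uint LegacyRegionLocY { get; set; }

    // New accessors in integer world coordinates (meters).
    public uint RegionWorldLocX { get { return LegacyRegionLocX * 256; } }
    public uint RegionWorldLocY { get { return LegacyRegionLocY * 256; } }

    // Old names kept for downward compatibility.
    public uint RegionLocX { get { return LegacyRegionLocX; } set { LegacyRegionLocX = value; } }
    public uint RegionLocY { get { return LegacyRegionLocY; } set { LegacyRegionLocY = value; } }
}
```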
*
the code that this is symmetric with CloseAgent()
*
it clear that all non-clientstack callers should be using this rather than RemoveClient() in order to step through the ScenePresence state machine properly.
Adds IScene.CloseAgent() to replace RemoveClient()
*
LLUDPServer.HandleCompleteMovementIntoRegion() to fix race condition regression in commit 7dbc93c (Wed Sep 18 21:41:51 2013 +0100)
This check is necessary to close a race condition where the CompleteAgentMovement processing could proceed when the UseCircuitCode thread had added the client to the client manager but before the ScenePresence had registered to process the CompleteAgentMovement message.
This is most probably why the message appeared to get lost on a proportion of entity transfers.
A better long term solution may be to set the IClientAPI.SceneAgent property before the client is added to the manager.
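
A hedged sketch of such a guard, with a stand-in interface in place of IClientAPI:

```csharp
using System.Threading;

// Sketch of the check described above: before CompleteAgentMovement is
// processed, poll until the UseCircuitCode path has attached a scene
// agent to the client. Names and timeout are assumptions.
public interface IClientWithAgent
{
    object SceneAgent { get; }
}

public static class CompleteMovementGuard
{
    public static bool WaitForSceneAgent(IClientWithAgent client, int timeoutMs)
    {
        for (int waited = 0; client.SceneAgent == null; waited += 200)
        {
            if (waited >= timeoutMs)
                return false;          // give up; the viewer resends CompleteMovement
            Thread.Sleep(200);
        }
        return true;
    }
}
```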
*
debugging purposes
*
a root agent."
This reverts commit c7ded0618c303f8c24a91c83c2129292beebe466.
This proves not to be necessary - the necessary checks are already being done via QueryAccess() before cross or teleport
*
avatars are not allowed to cross into a neighbour where they are not authorized, even if a child agent was allowed.
*
verb-noun is consistent with other similar methods
*
explicit that there is a problem if it still finds the agent to be a child when the sender wanted to wait until it became root.
Add some comments about the message sequence, though much more data is at
http://opensimulator.org/wiki/Teleports
*
agent.
Relevant if a child agent has been allowed into the region which should not be upgraded to a root agent.
*
so that they work.
This is necessary because the hypergrid groups checks (as referenced by estates) require an agent circuit to be present to construct the hypergrid ID.
However, this is not around until Scene.NewUserConnection(), as called by CreateAgent() in EntityTransferModule.
Therefore, if we're dealing with a hypergrid user, delay the check until NewUserConnection()/CreateAgent()
The entity transfer impact should be minimal since CreateAgent() is the next significant call after NewUserConnection()
However, to preserve the accuracy of query access we will only relax the check for HG users.
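
A sketch of the relaxed ordering (all names here are assumptions):

```csharp
using System;

// Illustrative sketch of the deferral described above: QueryAccess waves
// hypergrid users through because the agent circuit needed to build the
// HG ID does not exist yet; the deferred check then runs in
// NewUserConnection()/CreateAgent().
public class EstateAccessSketch
{
    public bool QueryAccess(bool isHypergridUser, Func<bool> estateAndGroupsCheck)
    {
        if (isHypergridUser)
            return true;                    // defer: no agent circuit yet
        return estateAndGroupsCheck();      // local users: stay accurate now
    }

    public bool NewUserConnection(Func<bool> estateAndGroupsCheck)
    {
        return estateAndGroupsCheck();      // the delayed check happens here
    }
}
```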
*
hear chat from their source region for some time after teleport completion.
This occurs on v2 teleport since the source region now waits 15 secs before closing the old child agent, which could still receive chat.
This commit introduces a ScenePresenceState.PreClose which is set before the wait, so that ChatModule can check for ScenePresenceState.Running.
This was theoretically also an issue on v1 teleport but since the pause before close was only 2 secs there, it was not noticed.
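
A sketch of the gate (enum values follow the commit text; everything else is illustrative):

```csharp
// Sketch of the state check described above: chat delivery simply skips
// presences that are not Running, such as a PreClose child waiting out
// the 15 sec close delay after a v2 teleport.
public enum PresenceStateSketch
{
    Running,
    PreClose,
    Closing,
    Removed
}

public static class ChatGate
{
    public static bool ShouldDeliverChat(PresenceStateSketch state)
    {
        return state == PresenceStateSketch.Running;
    }
}
```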
*
This causes extreme console spam if a simulator running latest master and one running 0.7.5 have adjacent regions occupied by avatars.
*
commit.
*
activity timeout, a root connection triggers a CloseAgent to a neighbour region which has already closed the agent due to inactivity.
Also separates out log messages to distinguish between a close not finding an agent and a wrong auth token, and downgrades the former to debug and the latter to warn.
*
should be kept open after teleport-end rather than doing this in the ET Module
This is safer since the close check in IncomingCloseAgent() is done under lock conditions, which prevents a race between ETM and Scene.AddClient()
*
This approach has problems if a client quits without sending a proper logout but then reconnects before the connection is closed due to inactivity.
In this case, the DoNotCloseAfterTeleport flag was wrongly set.
The simplest approach is to close child agents on teleport as quickly as possible so that races are very unlikely to occur.
Hence, this code now closes child agents as the first action after a successful teleport.
*
Scene.IncomingCloseAgent() instead.
IncomingCloseAgent() now sets the scene presence state machine properly, which is necessary to avoid races between multiple sources of close.
Hence, it's also necessary for everyone to consistently call IncomingCloseAgent()
Calling RemoveClient() directly currently generates an attention-grabbing exception, though right now this is harmless.
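
A sketch of why a single, locked entry point removes the race (illustrative, not the actual Scene code):

```csharp
// Hedged sketch: the state transition happens under a lock, so racing
// close sources (teleport timer, logout, neighbour close) cannot both
// proceed past the gate.
public class PresenceCloseGate
{
    private enum State { Running, Closing, Removed }

    private readonly object _sync = new object();
    private State _state = State.Running;

    public bool TryBeginClose()
    {
        lock (_sync)
        {
            if (_state != State.Running)
                return false;       // another close already won the race
            _state = State.Closing;
            return true;
        }
    }

    public void CompleteClose()
    {
        lock (_sync) { _state = State.Removed; }
    }
}
```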
*
(A,B,C) where the A root agent is still closed, terminating the connection.
This was occurring because the teleport to B did not set DoNotCloseAfterTeleport on A, since A was a neighbour (the flag isn't set for neighbours, to avoid the issue where the source region doesn't send Close() to regions that are still neighbours and hence never resets DoNotCloseAfterTeleport).
Fix here is to still set DoNotCloseAfterTeleport if scene presence is still registered as in transit from A
*
a line from A->B->C would not close region A when reaching C.
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires that the neighbour regions also detect when they should not close due to re-teleports re-establishing the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the agent.
This commit appears to fix these issues and improve teleport, but still has holes on at least quick reteleporting (and possibly occasionally on ordinary teleports).
Also, this has not been completely tested yet in scenarios where regions are running on different simulators.
*
add minor details to some log messages, rename a misleading local variable name.
No functional changes.
*
estate bans also, and delete the obsolete config var.
*
as needed. (Although sometimes Justin's state machine kicks in and doesn't let you.) The EventQueues are a hairy mess, and it's very easy to mess things up. But it looks like this commit makes them work right. Here's what's going on:
- Child and root agents are only closed after 15 sec, maybe
- If the user comes back, they aren't closed, and everything is reused
- On the receiving side, clients and scene presences are reused if they already exist
- Caps are always recreated (this is where I spent most of my time!). It turns out that, because the agents carry the seeds around, the seed gets the same URL, except for the root agent coming back to a faraway region, which gets a new seed (because we don't know what its seed was in the departing region, and we can't send it back to the client when the agent returns there).
*
DoNotClose to DoNotCloseAfterTeleport
*
was a previous root agent, do not close that child agent at the end of the 15 sec teleport timer.
This prevents an issue if the user teleports back to the neighbour simulator of a source before 15 seconds have elapsed.
This more closely emulates observed linden behaviour, though the timeout there is 50 secs and applies to all the pre-teleport agents.
Currently sticks a DoNotClose flag on ScenePresence though this may be temporary as possibly it could be incorporated into the ETM state machine
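
A sketch of the delayed close (the async scheduling here is an assumption; OpenSim's own timer plumbing differs):

```csharp
using System;
using System.Threading.Tasks;

// Sketch of the DoNotCloseAfterTeleport flow described above: the old
// agent is closed only after the delay, and only if no teleport back has
// flagged it to be kept.
public class TeleportCloseTimerSketch
{
    public bool DoNotCloseAfterTeleport { get; set; }

    public async Task CloseOldAgentAfterDelay(Action closeAgent)
    {
        await Task.Delay(TimeSpan.FromSeconds(15));
        if (DoNotCloseAfterTeleport)
        {
            DoNotCloseAfterTeleport = false;   // user came back: keep the agent
            return;
        }
        closeAgent();
    }
}
```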
*
the destination is being checked)
In this new protocol, and as committed before, the viewer is not sent EnableSimulator/EstablishChildCommunication for the destination. Instead, it is sent TeleportFinish directly. TeleportFinish, in turn, makes the viewer send a UseCircuitCode packet followed by a CompleteMovementIntoRegion packet. These 2 packets tend to occur one after the other almost immediately, to the point that when CMIR arrives the client is not even connected yet and that packet is ignored (there might have been some race conditions here before); then the viewer sends CMIR again within 5-8 secs. But the delay between them may be higher in busier regions, which may lead to race conditions.
This commit improves the process so there are no race conditions at the destination. CompleteMovement (triggered by the viewer) waits until Update has been sent from the origin. Update, in turn, waits until there is a *root* scene presence -- so making sure CompleteMovement has run MakeRoot. In other words, there are two threadlets at the destination, one from the viewer and one from the origin region, waiting for each other to do the right thing. That makes it safe to close the agent at the origin upon return of the Update call without having to wait for the callback, because we are absolutely sure that the viewer knows it is in the new region.
Note also that in the V1 protocol, the destination was getting UseCircuitCode from the viewer twice -- once on EstablishAgentCommunication and then again on TeleportFinish. The second UCC was being ignored, but it shows how we were not following the expected steps...
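
A hedged sketch of that two-way wait using reset events (timeouts and names are assumptions):

```csharp
using System;
using System.Threading;

// Sketch of the handshake described above: the origin-driven Update thread
// and the viewer-driven CompleteMovement thread each block until the other
// has done its part, so neither observes a half-set-up agent.
public class TeleportHandshakeSketch
{
    private readonly ManualResetEventSlim _updateArrived = new ManualResetEventSlim(false);
    private readonly ManualResetEventSlim _presenceIsRoot = new ManualResetEventSlim(false);

    // Runs when the origin region's UpdateAgent call arrives.
    public bool OnUpdateAgent()
    {
        _updateArrived.Set();                    // unblock CompleteMovement
        return _presenceIsRoot.Wait(20000);      // return only once MakeRoot ran
    }

    // Runs when the viewer's CompleteMovement arrives.
    public bool OnCompleteMovement(Action makeRootAgent)
    {
        if (!_updateArrived.Wait(20000))         // wait for the origin first
            return false;
        makeRootAgent();                         // promote to root presence
        _presenceIsRoot.Set();                   // origin may now safely close
        return true;
    }
}
```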
*
15 secs before closing the agent. This seems to be working fairly well. The viewer seems to have an 8 sec delay between UseCircuitCode and CompleteMovement.
Also added back the position on UpdateAgent, because it's needed for TPing between neighboring regions.