| Commit message | Author | Age | Files | Lines |
used, but made for a very misleading read of the code in the callers.
nonworking ownership assignment in SOG, which messed things up before.
No longer trust the client to send the ID of the person something is copied as,
since that would allow running a script with someone else's permissions.
Properly adjust inventory ownership and permissions.
isn't present in the presence dictionary.
The code to do this was already there, but it was being circumvented by newmap[agentID] before the check actually took place.
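As an aside, a minimal C# sketch of the intended check-before-add logic; the names newmap and agentID come from the message above, while the surrounding type is hypothetical and not the actual scene code:

    using System;
    using System.Collections.Generic;

    class PresenceMapSketch
    {
        readonly Dictionary<Guid, object> newmap = new Dictionary<Guid, object>();

        // The ContainsKey test has to run before any newmap[agentID] = ... assignment;
        // the indexer silently inserts the key, which makes a later check always pass.
        public bool TryAddPresence(Guid agentID, object presence)
        {
            if (newmap.ContainsKey(agentID))
                return false;            // already known: skip the duplicate add

            newmap.Add(agentID, presence);
            return true;
        }
    }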
list of all locations fresh for every scene presence on every frame, we will instead compute the list once every 50 frames and send it to all connected presences at that time. Also, only 60 items are added to the list when there are more than 60 presences in the scene. For 1000 users, this change yields a 99.8% reduction in list processing and a 98% reduction in network bandwidth for coarse locations.
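A rough C# sketch of the throttling described above, assuming hypothetical names (CoarseLocationSketch, FrameUpdate, SendCoarseLocationsTo); it is meant to illustrate the 50-frame interval and the 60-entry cap, not the actual Scene code:

    using System;
    using System.Collections.Generic;

    class CoarseLocationSketch
    {
        const int UpdateEveryNFrames = 50; // rebuild and send once every 50 frames
        const int MaxLocations = 60;       // cap the list in crowded scenes

        class Presence
        {
            public Guid Id;
            public float X, Y, Z;
        }

        readonly List<Presence> m_presences = new List<Presence>();
        int m_frame;

        // Called once per scene frame.
        public void FrameUpdate()
        {
            m_frame++;
            if (m_frame % UpdateEveryNFrames != 0)
                return; // 49 out of 50 frames: do nothing

            // Build the (possibly capped) list once per interval...
            List<Presence> coarse = new List<Presence>();
            foreach (Presence p in m_presences)
            {
                if (coarse.Count >= MaxLocations)
                    break;
                coarse.Add(p);
            }

            // ...and send the same list to every connected presence.
            foreach (Presence p in m_presences)
                SendCoarseLocationsTo(p, coarse);
        }

        // Placeholder for the per-client packet send.
        void SendCoarseLocationsTo(Presence client, List<Presence> coarse) { }
    }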
to 1 from 3
This is one step towards reducing HUD glitches on region crossing, since the viewer fails to display prims if it receives child full updates before the root prim full update.
This commit also introduces a mechanism in LLClientView to stop child attachment updates from ever going out before the root one.
This is a very temporary mechanism and will be commented out when the next step of the fix (giving root prims higher update priority) is committed.
This code is a forward-port of the equivalent changes in 0.6.9-post-fixes.
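A simplified sketch of the temporary hold-back mechanism described above, with hypothetical names (AttachmentUpdateGate, AllowUpdate); the real LLClientView logic is more involved:

    using System.Collections.Generic;

    class AttachmentUpdateGate
    {
        // Local IDs of root prims whose full update has already been sent.
        readonly HashSet<uint> m_sentRoots = new HashSet<uint>();

        // Returns true if an update for this attachment part may go out now.
        public bool AllowUpdate(uint localId, uint rootLocalId, bool isRoot)
        {
            if (isRoot)
            {
                m_sentRoots.Add(localId);
                return true;
            }

            // Child part: hold the update until the root's full update has gone out.
            return m_sentRoots.Contains(rootLocalId);
        }
    }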
that UUID is already in the scene.
This means that we don't perform pointless work.
successfully adding an object rather than true, in defiance of its method documentation.
This meant that the return values were inconsistent: false would be returned both for various scene object failure conditions (e.g. a null root part) and when the object was successfully added.
now only be for simultaneous add/removes of scene presences from the scene.
is fully rezzed and all scripts in it are instantiated. This ensures that link
messages will not be lost on rez/region crossing and makes heavily scripted
objects reliable.
AttachmentsModule
Scene and SceneGraph. This was the only change in this patch to keep it isolated from other recent changes to the same set of files.
GetAvatars have been removed to consolidate locking and iteration within SceneGraph. All callers which used these to then iterate over presences have been refactored to instead pass their delegates to Scene.ForEachScenePresence(Action<ScenePresence>).
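The delegate-based pattern this describes looks roughly like the following C# sketch; the types are stripped down and hypothetical, and the real Scene/SceneGraph classes carry much more state:

    using System;
    using System.Collections.Generic;

    class ScenePresence
    {
        public string Name = "";
    }

    class SceneGraph
    {
        readonly object m_presenceLock = new object();
        readonly List<ScenePresence> m_presences = new List<ScenePresence>();

        // Locking and iteration live in one place; callers just supply a delegate.
        public void ForEachScenePresence(Action<ScenePresence> action)
        {
            lock (m_presenceLock)
            {
                foreach (ScenePresence sp in m_presences)
                    action(sp);
            }
        }
    }

    class ExampleCaller
    {
        // Instead of fetching a presence list and iterating it locally:
        public void GreetEveryone(SceneGraph graph)
        {
            graph.ForEachScenePresence(sp => Console.WriteLine("Hello, " + sp.Name));
        }
    }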
eliminating option to return the actual list. Callers can now either request a copy of the array as a new List or ask the SceneGraph to call a delegate function on every ScenePresence. Iteration and locking of the ScenePresences now takes place only within the SceneGraph class.
This patch also applies a fix to Combat/CombatModule.cs which had unlocked iteration of the ScenePresences and inconsistent try/catch around the use of those ScenePresences.
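For callers that genuinely need a list, the "copy, never the live collection" accessor could look like the method below, added to the SceneGraph sketch shown earlier (again a hypothetical simplification, not the actual implementation):

    // Hands back a snapshot, so callers can iterate without holding the lock
    // and can never mutate the internal list.
    public List<ScenePresence> GetScenePresences()
    {
        lock (m_presenceLock)
        {
            return new List<ScenePresence>(m_presences);
        }
    }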
AttachmentsModule
need to rationalize method names later
root prim until right clicked (or otherwise updated).
The root cause of this problem was that multiple ObjectUpdates were being sent on attachment which differed enough to confuse the client.
Sometimes these would eliminate each other and sometimes not, depending on whether the scheduler looked at the queued updates.
The solution here is to only schedule the ObjectUpdate once the attachment code has done all it needs to do.
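In outline, the fix amounts to the ordering below; the method names are placeholders, not the real attachment code:

    class AttachmentOrderingSketch
    {
        class SceneObjectGroupStub
        {
            public void SetAttachmentPoint(uint point) { }
            public void PositionOnAvatar() { }
            public void ScheduleFullUpdate() { }
        }

        void Attach(SceneObjectGroupStub group, uint attachPoint)
        {
            // 1. Finish every state change the attach needs first.
            group.SetAttachmentPoint(attachPoint);
            group.PositionOnAvatar();

            // 2. Only then schedule a single ObjectUpdate, so the viewer never
            //    receives a sequence of conflicting updates for the same attach.
            group.ScheduleFullUpdate();
        }
    }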
Previously, only detach was firing!
This brings presence-refactor up to master again
Fixes: Undo, T-pose of others on login, modifiedBulletX works again, feet now stand on the ground instead of in the ground, adds checks to CombatModule. Adds: Redo, Land Undo, checks to agentUpdate (so one cannot fall off a region), more vehicle parts. Finishes almost all of LSL (1 function left, 2 events).
Direct flames and kudos to Revolution, please.
Signed-off-by: Melanie <melanie@t-data.com>
This was a large, heavily conflicted merge and things MAY have got broken.
Please check!
when a user teleports into a region"
The behavior introduced here is not compatible with SL
This reverts commit b6bee4999c9d238a052022f105069ea4eb85f8f4.
user teleports into a region
Previously, only detach was firing!
itself has been unsuccessful
* Moved a few key inventory access methods from Scene.Inventory to an IInventoryAccessModule module
avoiding locking and copying the list each time it is accessed
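One common shape for this kind of change is to keep an immutable snapshot array that is rebuilt only when the collection is modified, so readers never take the lock or copy the list; a generic C# sketch under that assumption (hypothetical, not the actual patch):

    using System;
    using System.Collections.Generic;

    class SnapshotCollection<T>
    {
        readonly object m_sync = new object();
        readonly Dictionary<Guid, T> m_items = new Dictionary<Guid, T>();
        T[] m_snapshot = new T[0];      // readers use this without taking the lock

        public void Add(Guid id, T item)
        {
            lock (m_sync)
            {
                m_items[id] = item;
                RebuildSnapshot();
            }
        }

        public void Remove(Guid id)
        {
            lock (m_sync)
            {
                if (m_items.Remove(id))
                    RebuildSnapshot();
            }
        }

        void RebuildSnapshot()
        {
            T[] copy = new T[m_items.Count];
            m_items.Values.CopyTo(copy, 0);
            m_snapshot = copy;           // reference assignment is atomic in .NET
        }

        // Readers get the current snapshot: no lock taken, no per-call copy made.
        public T[] GetSnapshot() { return m_snapshot; }
    }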
This avoids .NET remoting and a managed->unmanaged->managed jump. Overall, a night and day performance difference
* Initialize the LLClientView prim full update queue to the number of prims in the scene for a big performance boost
* Reordered some comparisons on hot code paths for a minor speed boost
* Removed an unnecessary call to the expensive DateTime.Now function (if you *have* to get the current time as opposed to Environment.TickCount, always use DateTime.UtcNow)
* Don't fire the queue empty callback for the Resend category
* Run the outgoing packet handler thread loop for each client synchronously. It seems like more time was being spent doing the execution asynchronously, and it made deadlocks very difficult to track down
* Rewrote some expensive math in LandObject.cs
* Optimized EntityManager to only lock on operations that need locking, and use TryGetValue() where possible
* Only update the attachment database when an object is attached or detached
* Other small misc. performance improvements
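Two of the items above are general .NET points worth illustrating; a small standalone example, not taken from the patch itself:

    using System;
    using System.Collections.Generic;

    static class HotPathExamples
    {
        // DateTime.Now performs a time-zone conversion on every call; DateTime.UtcNow
        // does not, so prefer UtcNow (or Environment.TickCount for intervals) on hot paths.
        public static long UtcTimestampTicks()
        {
            return DateTime.UtcNow.Ticks;
        }

        // One dictionary lookup via TryGetValue instead of two (ContainsKey + indexer).
        public static bool TryGetEntity(Dictionary<uint, object> entities, uint localId, out object entity)
        {
            return entities.TryGetValue(localId, out entity);
        }
    }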
re-prioritizing updates
implements a simple distance prioritizer based on initial agent positions. Re-prioritizing and more advanced priority algorithms will follow soon
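A distance prioritizer of the kind described can be as small as the sketch below, where lower values mean higher priority (hypothetical types; the real prioritizer hooks into the client update queues):

    struct Vector3f
    {
        public float X, Y, Z;
        public Vector3f(float x, float y, float z) { X = x; Y = y; Z = z; }
    }

    static class DistancePrioritizer
    {
        // Squared distance from the agent's (initial) position; avoids a sqrt per object.
        public static double GetPriority(Vector3f agentPosition, Vector3f entityPosition)
        {
            double dx = entityPosition.X - agentPosition.X;
            double dy = entityPosition.Y - agentPosition.Y;
            double dz = entityPosition.Z - agentPosition.Z;
            return dx * dx + dy * dy + dz * dz;
        }
    }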
so it is clear who/what the broadcast is going to each time
* Removed two redundant parameters from SceneObjectPart
* Changed some code in terse update sending that was meant to work with references to work with value types (since Vector3 and Quaternion are structs)
* Committing a preview of a new method for sending object updates efficiently (all commented out for now)
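The Vector3/Quaternion point is plain C# struct copy semantics; the toy example below shows why code written as if it held a reference silently operates on a copy (illustrative only):

    struct Vector3f
    {
        public float X, Y, Z;
    }

    class StructCopySemantics
    {
        Vector3f m_position;

        public void LooksLikeAReferenceButIsNot()
        {
            Vector3f pos = m_position;  // copies the struct
            pos.X += 1.0f;              // modifies only the local copy
            // m_position.X is still unchanged here.
        }

        public void WorkWithTheValueAndWriteItBack()
        {
            Vector3f pos = m_position;
            pos.X += 1.0f;
            m_position = pos;           // explicit write-back is required
        }
    }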
During the heartbeat loop, Update() is called on every SceneObjectGroup which in turn checks if any SceneObjectPart has changed. For large regions (> 100k prims) this work consumes 20-30% of a CPU even though there are only a few objects updating each frame.
There is only one other reason to check every object on every frame, and that is the case where a script has registered the object with an "at target" listener. We can easily track when an object is registered or unregistered with an AtTarget, so this is not a reason to check every object every heartbeat.
In the attached patch, I have added a dictionary to the scene which tracks the objects which have At Targets. Each heartbeat, the AtTarget() function will be called on every object registered with a listener for that event. Also, I added a dictionary to SceneGraph which stores references to objects which have been queued for updates during the heartbeat. At each heartbeat, Update() is called only on the objects which have generated updates during that beat.
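Condensed into a sketch, the two dictionaries and the per-heartbeat loop might look like this (simplified, hypothetical types; the real patch works on SceneObjectGroup inside Scene/SceneGraph):

    using System.Collections.Generic;

    class SceneObjectStub
    {
        public uint LocalId;
        public void CheckAtTargets() { }   // evaluate registered at_target listeners
        public void Update() { }           // flush this object's pending part updates
    }

    class HeartbeatSketch
    {
        readonly object m_sync = new object();
        // Objects that have registered an at_target listener.
        readonly Dictionary<uint, SceneObjectStub> m_atTargetObjects = new Dictionary<uint, SceneObjectStub>();
        // Objects that generated an update since the last heartbeat.
        readonly Dictionary<uint, SceneObjectStub> m_updateList = new Dictionary<uint, SceneObjectStub>();

        public void RegisterAtTarget(SceneObjectStub so)   { lock (m_sync) { m_atTargetObjects[so.LocalId] = so; } }
        public void UnregisterAtTarget(SceneObjectStub so) { lock (m_sync) { m_atTargetObjects.Remove(so.LocalId); } }
        public void ObjectChanged(SceneObjectStub so)      { lock (m_sync) { m_updateList[so.LocalId] = so; } }

        // Called once per heartbeat: only the interesting objects are touched,
        // instead of calling Update() on every object in the region.
        public void Heartbeat()
        {
            List<SceneObjectStub> atTargets, changed;
            lock (m_sync)
            {
                atTargets = new List<SceneObjectStub>(m_atTargetObjects.Values);
                changed = new List<SceneObjectStub>(m_updateList.Values);
                m_updateList.Clear();
            }

            foreach (SceneObjectStub so in atTargets)
                so.CheckAtTargets();
            foreach (SceneObjectStub so in changed)
                so.Update();
        }
    }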
objects. This is about half of the code base reviewed."
This reverts commit e992ca025571a891333a57012c2cd4419b6581e5.