Commit messages
|
|
|
regionhandle and saves it to the X,Y vars in the OpenSim.Framework.Location object was inverting X and Y, resulting in X and Y confusion. The test also used 256x256 in the (uint, uint) constructor, so it was unable to detect whether the X and Y components were swapped. I don't expect much upheaval from this commit; not a lot of features were using the ulong Location constructor. The database never stores the ulong regionhandle... the prims are loaded by region Guid. LLUDPServer used it to determine the regions it handled in a service definition, but that was simply an X == X test, which gives the same logical result whether or not the components are switched. Again, thanks LibOMV for the regionhandle code.
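
For reference, a minimal sketch of the packing convention involved (following the LibOMV convention mentioned above; illustrative, not the actual OpenSim code): the global X coordinate lives in the high 32 bits of the ulong handle and Y in the low 32 bits, so swapping the halves when unpacking produces exactly the X/Y confusion described.

    // Sketch only: pack/unpack a ulong region handle per the LibOMV convention.
    public static class RegionHandleSketch
    {
        public static ulong ToHandle(uint x, uint y)
        {
            // X in the high 32 bits, Y in the low 32 bits.
            return ((ulong)x << 32) | (ulong)y;
        }

        public static void FromHandle(ulong handle, out uint x, out uint y)
        {
            x = (uint)(handle >> 32);         // high half -> X
            y = (uint)(handle & 0xFFFFFFFF);  // low half  -> Y
        }
    }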
|
|
|
|
|
|
|
add this to the StatsManager
This reflects the actual use of this stat - it hasn't recorded general exceptions for some time.
Make the sim extra stats collector draw the data from the stats manager rather than maintaining this data itself.
|
|
|
|
|
|
join/drop appropriately, invitechatboxes.
The major departure from Flotsam is to send only one message per destination region, as opposed to one message per group member. This reduces messaging considerably for large groups that have clusters of members in certain regions.
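
A rough, self-contained sketch of that fan-out idea (the type and method names are illustrative, not the group-messaging module's real API): recipients are grouped by the region they are currently in, and one message is relayed per region rather than per member.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class OnlineMember { public Guid AgentId; public ulong RegionHandle; }

    static class GroupChatFanOutSketch
    {
        // Stand-in for the actual region-to-region delivery call.
        static void SendToRegion(ulong regionHandle, string message, List<Guid> agentIds)
        {
            Console.WriteLine("-> region {0}: '{1}' for {2} member(s)",
                              regionHandle, message, agentIds.Count);
        }

        public static void Relay(IEnumerable<OnlineMember> members, string message)
        {
            // One message per destination region; the receiving region fans it
            // out locally to the listed members.
            foreach (var byRegion in members.GroupBy(m => m.RegionHandle))
                SendToRegion(byRegion.Key, message,
                             byRegion.Select(m => m.AgentId).ToList());
        }
    }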
|
|
|
|
1sec). Group chat is going over the EQ... Hopefully this won't increase CPU when there's nothing going on, but we need to watch for that.
|
|
|
|
week's SIMULATOR/0.1 protocol for now.
|
|
|
|
|
|
|
|
the destination is being checked)
In this new protocol, and as committed before, the viewer is not sent EnableSimulator/EstablishChildCommunication for the destination. Instead, it is sent TeleportFinish directly. TeleportFinish, in turn, makes the viewer send a UseCircuitCode packet followed by a CompleteMovementIntoRegion packet. These two packets tend to arrive almost immediately one after the other, to the point that when CMIR arrives the client is not even connected yet and that packet is ignored (there may have been some race conditions here before); the viewer then sends CMIR again within 5-8 secs. But the delay between them may be higher in busier regions, which may lead to race conditions.
This commit improves the process so there are no race conditions at the destination. CompleteMovement (triggered by the viewer) waits until Update has been sent from the origin. Update, in turn, waits until there is a *root* scene presence -- thus making sure CompleteMovement has run MakeRoot. In other words, there are two threadlets at the destination, one from the viewer and one from the origin region, waiting for each other to do the right thing. That makes it safe to close the agent at the origin upon return of the Update call, without having to wait for the callback, because we are absolutely sure that the viewer knows it is in the new region.
Note also that in the V1 protocol, the destination was getting UseCircuitCode from the viewer twice -- once on EstablishAgentCommunication and then again on TeleportFinish. The second UseCircuitCode was being ignored, but it shows how we were not following the expected steps...
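
A minimal sketch of the two-threadlet handshake described above (the structure is assumed for illustration, not the actual scene-presence code): one event records that the origin's Update call has arrived, the other that the presence has been made root.

    using System;
    using System.Threading;

    class TeleportHandshakeSketch
    {
        private readonly ManualResetEventSlim m_updateArrived = new ManualResetEventSlim(false);
        private readonly ManualResetEventSlim m_madeRoot = new ManualResetEventSlim(false);

        // Runs on the thread servicing the viewer's CompleteMovementIntoRegion.
        public void CompleteMovement()
        {
            m_updateArrived.Wait(TimeSpan.FromSeconds(10)); // wait for the origin's Update
            // ... MakeRoot() would run here ...
            m_madeRoot.Set();
        }

        // Runs on the thread servicing the origin region's Update call.
        public bool Update()
        {
            m_updateArrived.Set();
            // Only return success once the presence is root, so the origin can
            // safely close the agent without waiting for a separate callback.
            return m_madeRoot.Wait(TimeSpan.FromSeconds(10));
        }
    }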
|
|
|
|
than it should have been (though internal use was correct)
|
well as the average.
This is somewhat cryptic at the moment; the documentation needs improving.
|
|
|
|
lkalif for telling me how to route the information. The viewer effect is subject to the distance filter, so only avatars with cameras less than 10m away see the beams.
|
|
|
|
|
|
- The existing event to the scene has been split into two: OnAgentUpdate and OnAgentCameraUpdate, to better reflect the two types of updates that the viewer sends. We can run one without the other, which is what happens when the avie is still but the user is camming around.
- Added thresholds (as opposed to equality checks) to determine whether an update is significant or not (see the sketch below). I think these thresholds are OK, but we can play with them later.
- Ignore updates of HeadRotation, which were problematic and aren't being used upstream.
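
As a rough illustration of the threshold idea (assuming OpenMetaverse's Vector3 type; the fields compared and the tolerance value are illustrative, not the ones actually committed), a camera update counts as significant only when it moves or re-aims the camera by more than a small tolerance:

    using System;
    using OpenMetaverse;

    static class AgentUpdateSignificanceSketch
    {
        // Illustrative tolerance; the value actually used may differ.
        const float CameraTolerance = 0.05f;

        public static bool CameraChanged(Vector3 lastCamPos, Vector3 newCamPos,
                                         Vector3 lastCamAt, Vector3 newCamAt)
        {
            // Thresholds instead of strict equality: small jitter from the viewer
            // no longer counts as a significant camera update.
            return Exceeds(lastCamPos, newCamPos, CameraTolerance)
                || Exceeds(lastCamAt, newCamAt, CameraTolerance);
        }

        static bool Exceeds(Vector3 a, Vector3 b, float tol)
        {
            return Math.Abs(a.X - b.X) > tol
                || Math.Abs(a.Y - b.Y) > tol
                || Math.Abs(a.Z - b.Z) > tol;
        }
    }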
|
need to dequeue and enqueue items every 1sec.
|
|
|
|
these requests actively on the processing queue if it seems they're not ready.
|
|
|
|
|
|
requests timeout in 60 secs.
There's plenty of room for improvement in handling the EQs. Some other time...
|
|
|
|
This reverts commit 52dc7b2a96a28798d55d07d79d003ce5e3d35216.
|
|
|
|
This reverts commit 5495df74436d6c0039a1500d979a964b003abfdf.
|
|
|
|
|
|
count >0 is not the smartest move...""
This reverts commit 71278919575b0e0222cdbe3c0cefa5919f9a75bc.
|
|
|
|
This reverts commit fda91d93dad1fa6f901e8db5829aa8b70477c97e.
|
|
|
|
|
|
|
|
successfully tested, and I'm merging back those changes, which proved to be good.
Revert "Revert "Cleared up much confusion in PollServiceRequestManager. Here's the history:""
This reverts commit fa2370b32ee57a07f27501152c3c705a883b13d8.
|
where the q parameter is ignored and everything is always placed on m_lowQueue.
No actual impact presently, since nothing ends up calling EnqueueHigh().
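
A self-contained sketch of the bug pattern being described (field and method names are illustrative, not the exact OpenSim.Framework.DoubleQueue internals):

    using System.Collections.Generic;

    class DoubleQueueSketch<T>
    {
        private readonly Queue<T> m_highQueue = new Queue<T>();
        private readonly Queue<T> m_lowQueue = new Queue<T>();
        private readonly object m_sync = new object();

        public void EnqueueHigh(T data) { Enqueue(m_highQueue, data); }
        public void EnqueueLow(T data)  { Enqueue(m_lowQueue, data); }

        private void Enqueue(Queue<T> q, T data)
        {
            lock (m_sync)
            {
                // Buggy version always did: m_lowQueue.Enqueue(data);
                // silently ignoring the q parameter. The fix is to honour it:
                q.Enqueue(data);
            }
        }
    }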
|
history:"
This reverts commit e46459ef21e1ee5ceaeca70365a7c881d33b09ce.
|
|
|
|
This reverts commit 0f5b616fb0ebf9207b3cc81771622ed1290ea7d6.
|
|
|
|
|
|
not the smartest move..."
This reverts commit f4317dc26d670c853d0ea64b401b00f718f09474.
|
|
|
|
This reverts commit af792bc7f2504e9ccf1c8ae7568919785dc397c9.
|
|
|
|
This reverts commit 1d3deda10cf85abd68a5f904d6698ae597a67cc0.
|
|
|
|
|
|
PollServiceRequestManager."
This reverts commit 5f95f4d78e8c7d17b8ba866907156fe6d4444c04.
|
|
|
|
|
|
the purpose of BlockingQueues. Trying this, to see the effect on CPU."
This reverts commit 5232ab0496eb4fe6903a0fd328974ac69df29ad8.
|
|
|
|
purpose of BlockingQueues. Trying this, to see the effect on CPU.
|
|
|
|
PollServiceRequestManager.
|
smartest move...
|
When Melanie added the web fetch inventory throttle to core, she made the long poll requests (EQs) effectively be handled in an active loop. All those requests, if they existed, were being constantly dequeued, checked for events (which most often they didn't have), and requeued again. This was an active loop thread on a 100ms cycle!
This fixes the issue. Now the inventory requests, if they aren't ready to be served, are placed directly back in the queue, but the long poll requests aren't placed there until there are events ready to be sent or the timeout has been reached.
This puts the LongPollServiceWatcherThread back to a 1 sec cycle, as it was before.
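
A simplified, self-contained sketch of that dispatch policy (the names and structure here are assumptions for illustration, not the actual PollServiceRequestManager code): requests that aren't ready go straight back on the active queue, while long polls are parked and only re-checked by the watcher on its 1 sec cycle.

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    enum PollRequestType { LongPoll, Inventory, Other }

    class PollRequest
    {
        public PollRequestType Type;
        public bool HasEvents;     // stand-in for "events are ready to send"
        public bool TimedOut;      // stand-in for the 60 sec timeout check
    }

    class PollDispatcherSketch
    {
        readonly ConcurrentQueue<PollRequest> m_activeQueue = new ConcurrentQueue<PollRequest>();
        readonly List<PollRequest> m_longPolls = new List<PollRequest>();

        // Requests that are not ready to be served come back through here.
        public void Requeue(PollRequest req)
        {
            if (req.Type == PollRequestType.LongPoll)
                lock (m_longPolls) m_longPolls.Add(req);   // parked; re-checked on a 1 sec cycle
            else
                m_activeQueue.Enqueue(req);                // everything else stays on the active queue
        }

        // Run by the LongPollServiceWatcherThread once per second.
        public void CheckLongPolls()
        {
            lock (m_longPolls)
                foreach (var req in m_longPolls.FindAll(r => r.HasEvents || r.TimedOut))
                {
                    m_longPolls.Remove(req);
                    m_activeQueue.Enqueue(req);            // now ready: hand over for processing
                }
        }
    }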
|
|
|
|
from BlockingQueue.Contains(), Count() and GetQueueArray()
|
|
|
|
|
|
|
|
|
|
|
block will trigger a Monitor.Wait() to exit, so avoid some locks that don't actually affect the state of the internal queues in the BlockingQueue class.""
This reverts commit 21a09ad3ad42b24bce4fc04c6bcd6f7d9a80af08.
After more analysis and discussion, it is apparent that Count(), Contains() and GetQueueArray() cannot be made thread-safe anyway without external locking, and this change appears to have a positive impact on performance.
I still believe that Monitor.Exit() will not release any thread waiting in Monitor.Wait() (as per http://msdn.microsoft.com/en-gb/library/vstudio/system.threading.monitor.exit%28v=vs.100%29.aspx), so this should in theory make no difference, though Mono implementation issues could possibly be coming into play.
|
|
|
|
|
|
|
|
|
|
|
|
will trigger a Monitor.Wait() to exit, so avoid some locks that don't actually affect the state of the internal queues in the BlockingQueue class."
This reverts commit 42e2a0d66eaa7e322bce817e9e2cc9a288de167b.
Reverting because, unfortunately, this introduces race conditions: Contains(), Count() and GetQueueArray() may now end up returning the wrong result if another thread performs a simultaneous update on m_queue.
Code such as PollServiceRequestManager.Stop() relies on the count being correct, otherwise a request may be lost.
Also, though some of the internal queue methods do not affect state, they are not thread-safe and could return the wrong result, generating the same problem.
lock() generates Monitor.Enter() and Monitor.Exit() under the covers. Monitor.Exit() does not cause Monitor.Wait() to exit; only Pulse() and PulseAll() will do this.
Reverted with agreement.
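
To make the locking argument concrete, here is a minimal blocking-queue sketch (illustrative, not the actual OpenSim.Framework.BlockingQueue code): lock() compiles down to Monitor.Enter()/Monitor.Exit(), a consumer blocked in Monitor.Wait() is only woken by Pulse()/PulseAll(), and even a read-only helper like Count() needs the lock to return a trustworthy value.

    using System.Collections.Generic;
    using System.Threading;

    class BlockingQueueSketch<T>
    {
        private readonly Queue<T> m_queue = new Queue<T>();
        private readonly object m_sync = new object();

        public void Enqueue(T item)
        {
            lock (m_sync)                  // Monitor.Enter(m_sync)
            {
                m_queue.Enqueue(item);
                Monitor.Pulse(m_sync);     // this, not the implicit Exit, wakes a waiter
            }                              // Monitor.Exit(m_sync)
        }

        public T Dequeue()
        {
            lock (m_sync)
            {
                while (m_queue.Count == 0)
                    Monitor.Wait(m_sync);  // releases the lock and sleeps until Pulse
                return m_queue.Dequeue();
            }
        }

        // Reading Count still needs the lock: without it, a concurrent
        // Enqueue/Dequeue can make the returned value stale or incorrect.
        public int Count()
        {
            lock (m_sync)
                return m_queue.Count;
        }
    }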
|
|
|
|
trigger a Monitor.Wait() to exit, so avoid some locks that don't actually affect the state of the internal queues in the BlockingQueue class.
|
17:14:28 - [APPLICATION]:
APPLICATION EXCEPTION DETECTED: System.UnhandledExceptionEventArgs
Exception: System.InvalidOperationException: Operation is not valid due to the current state of the object
at System.Collections.Generic.Queue`1[OpenSim.Region.ClientStack.Linden.WebFetchInvDescModule+aPollRequest].Peek () [0x00011] in /root/install/mono-3.1.0/mono/mcs/class/System/System.Collections.Generic/Queue.cs:158
at System.Collections.Generic.Queue`1[OpenSim.Region.ClientStack.Linden.WebFetchInvDescModule+aPollRequest].Dequeue () [0x00000] in /root/install/mono-3.1.0/mono/mcs/class/System/System.Collections.Generic/Queue.cs:140
at OpenSim.Framework.DoubleQueue`1[OpenSim.Region.ClientStack.Linden.WebFetchInvDescModule+aPollRequest].Dequeue (TimeSpan wait, OpenSim.Region.ClientStack.Linden.aPollRequest& res) [0x0004e] in /home/avacon/opensim_2013-07-14/OpenSim/Framework/Util.cs:2297
|
|
|
|
|
handlers.
This adds explicit cap poll handler support to the Caps classes rather than relying on callers to do the complicated coding.
Other refactoring was required to get the logic into the right places to support this.
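
As a purely illustrative sketch of the idea (the names below are hypothetical, not the real Caps API): the Caps container owns the poll-handler plumbing, so a module only hands it a handler instead of wiring up the HTTP listener and seed-cap entry itself.

    // Hypothetical sketch only; not the actual OpenSim Caps classes.
    using System;
    using System.Collections.Generic;

    class PollHandlerSketch
    {
        public Func<bool> HasEvents;    // does this request have data ready yet?
        public Func<string> GetEvents;  // produce the response body when it does
    }

    class CapsSketch
    {
        private readonly Dictionary<string, PollHandlerSketch> m_pollHandlers =
            new Dictionary<string, PollHandlerSketch>();

        // Registering here replaces the "complicated coding" each caller used to do:
        // the Caps object sets up the HTTP poll listener and advertises the cap URL.
        public void RegisterPollHandler(string capName, PollHandlerSketch handler)
        {
            m_pollHandlers[capName] = handler;
            // ... add HTTP listener, register the cap with the seed cap, etc. ...
        }
    }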
|
|
|
|
|
Add a GetStatsAsOSDMap method to StatsManager which allows filtered fetching of stats, for eventually returning them over the internets.
|
|
|
|
|
|
|
|
returned 499 and the exception message in the http_response event rather than the actual response code and body.
This was a regression since commit 831e4c3 (Thu Apr 4 00:36:15 2013)
This commit also adds a regression test for this case, though this currently only works with Mono
This aims to address http://opensimulator.org/mantis/view.php?id=6704
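
The general .NET pattern behind this kind of fix (a hedged sketch, not the verbatim OpenSim change): when a non-2xx status makes HttpWebRequest.GetResponse() throw, the real response is still attached to the WebException, so its status code and body can be reported instead of a catch-all 499 plus the exception message.

    using System.IO;
    using System.Net;

    static class HttpStatusSketch
    {
        // Returns the body and reports the real status code, even for non-2xx
        // responses that make GetResponse() throw.
        public static string Fetch(string url, out int status)
        {
            HttpWebResponse response;
            try
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                response = (HttpWebResponse)request.GetResponse();
            }
            catch (WebException e)
            {
                response = e.Response as HttpWebResponse;
                if (response == null)
                    throw;                  // no response at all: a genuine failure
            }

            using (response)
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                status = (int)response.StatusCode;
                return reader.ReadToEnd();
            }
        }
    }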
|
|
|
|
requesting.
|
|