agents. Child throttles are based on the number of child agents
known to the root and at least 1/4 of the throttle given to
the root.
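A minimal sketch of that kind of split, assuming the rule is "divide the root throttle across the known child agents, but never give a child less than 1/4 of the root allocation" (ThrottleMath and ChildThrottleShare are invented names, not the actual OpenSim symbols):

```csharp
using System;

// Illustrative only: split the root agent's throttle across its child agents,
// with a floor of 1/4 of the root allocation per child.
public static class ThrottleMath
{
    public static int ChildThrottleShare(int rootThrottle, int childCount)
    {
        if (childCount < 1)
            return rootThrottle;

        int evenShare = rootThrottle / childCount;
        int floor = rootThrottle / 4;
        return Math.Max(evenShare, floor);
    }

    public static void Main()
    {
        // With 8 children the even share (1/8) is below the 1/4 floor,
        // so each child is clamped up to 1/4 of the root throttle.
        Console.WriteLine(ChildThrottleShare(100000, 8)); // 25000
    }
}
```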
limits because the only ones used now are the defaults (which are overwritten
by the client throttles anyway). Updated the default rates to correspond to
about 350kbps.
Also added a configuration to disable adaptive throttle. The default
is the previous behavior (no adaptation).

command to look at the entity update priority queue. Added a "name" parameter
to show queues, show pqueues and show throttles to look at data for a specific
user.

clients. If the sent packets are ack'ed successfully, the throttle
will open quickly up to the maximum specified by the client and/or
the sim's client throttle.
This still needs a lot of adjustment to get the rates correct.
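The behavior described, opening quickly on successful acks up to a ceiling set by the client and/or sim throttle, is essentially an additive-increase/multiplicative-decrease scheme. A rough sketch under that assumption (AdaptiveRate, OnAck and OnExpire are invented names, not the real LLUDPClient members):

```csharp
using System;

// Illustrative AIMD-style rate controller: open the throttle quickly while
// packets are acked, back off when a packet times out, and never exceed the
// limit requested by the client or the sim's client throttle.
public class AdaptiveRate
{
    private readonly int _maxRate;   // ceiling from the client/sim throttle
    private int _currentRate;

    public AdaptiveRate(int startRate, int maxRate)
    {
        _currentRate = startRate;
        _maxRate = maxRate;
    }

    public int CurrentRate => _currentRate;

    // Called when sent packets are acked successfully.
    public void OnAck(int ackedBytes)
    {
        _currentRate = Math.Min(_currentRate + ackedBytes, _maxRate);
    }

    // Called when a packet expires without an ack (treated as congestion).
    public void OnExpire()
    {
        // Halve the rate, but never drop below an arbitrary illustrative floor.
        _currentRate = Math.Max(_currentRate / 2, 1000);
    }
}
```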
this appears to cause problems with the system timer resolution.
This caused a problem with tokens going into the root throttle as
bursts leading to some starvation.
Also changed EnqueueOutgoing to always queue a packet if there
are already packets in the queue. Ensures consistent ordering
of packet sends.
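A sketch of the queue-or-send decision that gives this ordering guarantee (OutgoingGate and its members are invented for illustration; only the EnqueueOutgoing name comes from the commit above):

```csharp
using System.Collections.Generic;

// Illustrative outgoing-packet gate: a packet may bypass the queue only when
// the queue is already empty AND the throttle has room; otherwise it is
// queued behind the packets that are already waiting, so a newer packet can
// never overtake an older one.
public class OutgoingGate
{
    private readonly Queue<byte[]> _queue = new Queue<byte[]>();
    private int _tokens;

    public OutgoingGate(int initialTokens) { _tokens = initialTokens; }

    // Returns true if the packet was sent immediately, false if it was queued.
    public bool EnqueueOutgoing(byte[] packet)
    {
        if (_queue.Count == 0 && TryRemoveTokens(packet.Length))
        {
            Send(packet);
            return true;
        }

        _queue.Enqueue(packet);
        return false;
    }

    private bool TryRemoveTokens(int amount)
    {
        if (_tokens < amount) return false;
        _tokens -= amount;
        return true;
    }

    private void Send(byte[] packet) { /* hand off to the UDP socket */ }
}
```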
when client and simulator throttles are set. This algorithm also uses a
pre-defined burst rate of 150% of the sustained rate for each of the
throttles.
Removed the "state" queue. The state queue is not a Linden queue and
appeared to be used just to get kill packets sent.
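As a worked example of the 150% burst rule with an assumed drip rate (the numbers are illustrative only):

```csharp
using System;

class BurstExample
{
    static void Main()
    {
        // Illustrative numbers: with a sustained (drip) rate of 10,000
        // bytes/sec, a 150% burst rule caps the bucket at 15,000 bytes.
        int dripRate = 10000;
        int maxBurst = dripRate * 3 / 2;
        Console.WriteLine($"drip={dripRate} B/s, burst={maxBurst} B");
    }
}
```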
time to wait to retransmit packets) always maxed out (no retransmissions
for 24 or 48 seconds).
Note that this is going to cause faster (and more) retransmissions. Fix
for dynamic throttling needs to go with this.

This reverts commit 585473aade100c3ffeef27e0c8e6b6c8c09d0109.

This reverts commit ba202ea9b08b1205de343c65fd209b6cca4cb6bc.

This is already being incremented in LLUDPServer.SendPacketFinal for every packet

of bytes.
Byte amounts aren't actually available - this was a misunderstanding of TokenBucket.Content. But raw packet numbers are.

If an LL 1.23.5 client (and possibly earlier and later) receives an object update after a kill object packet, it leaves the deleted prim in the scene until client relog
This is possible in LLUDPServer if an object update packet is queued but a kill packet is sent immediately.
Beyond invasive tracking of kill sending, most expedient solution is to always queue kills, so that they always arrive after updates.
In tests, this doesn't appear to affect performance.
There is probably still an issue present where an update packet might not be acked and then resent after the kill packet.

This should show the number of bytes sent to the client that it has not yet acknowledged.

For each agent, this command shows how many packets have been sent/received and how many bytes remain in each of the send queues (resend, land, texture, etc.)
Sometimes useful for diagnostics

Object updates are sent on the task queue. It's possible for an object update to be placed on the client queue before a kill packet comes along.
The kill packet would then be placed on the state queue and possibly get sent before the update
If the update gets sent afterwards then the client gets undeletable, ownerless objects until relog
Placing the kills in the task queue should mean that they are received after updates. The kill record prevents subsequent updates getting on the queue
Comments state that updates are sent via the state queue but this isn't true. If this was the case this problem might not exist.
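A sketch of the kill-record idea described above (KillRecord and its members are invented, not the actual LLClientView implementation):

```csharp
using System.Collections.Generic;

// Illustrative kill-record: once a kill has been queued for a local ID, any
// update for that ID that arrives later is silently dropped, so the viewer
// can never receive an update after the kill.
public class KillRecord
{
    private readonly HashSet<uint> _killedIds = new HashSet<uint>();
    private readonly object _sync = new object();

    public void QueueKill(uint localId)
    {
        lock (_sync)
            _killedIds.Add(localId);
        // ...queue the KillObject packet on the task queue here...
    }

    // Returns false if the update should be discarded because a kill for the
    // same object has already been queued.
    public bool ShouldSendUpdate(uint localId)
    {
        lock (_sync)
            return !_killedIds.Contains(localId);
    }
}
```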
CheckForSignificantMovement()
* Removed a lock on "return m_neighbours.Count" in GetInaccurateNeighborCount(). Dictionary<>.Count by itself does not benefit from locking

purposes. Resolves the wrap-around of the 32 bit uint.
* Teravus moved the Environment methods to the Util class

category to task
* Fixing a bug where the max burst rate for the state category was being set as unlimited, causing connections to child agents to saturate bandwidth
* Upped the example default drip rates to 1000 bytes/sec, the minimum granularity for the token buckets

to use a non-blocking parallel method when operating in async mode
* Minor code readability cleanup

LLUDPClient.BackoffRTO()

Parallel. This is quite possibly the source of some deadlocking, and at the very least the synchronous version gives better stack traces
* Lock the LLUDPClient RTO math * Add a helper function for backing off the RTO, and follow the optional advice in RFC 2988 to clear existing SRTT and RTTVAR values during a backoff
* Removing the unused PrimitiveBaseShape.SculptImage parameter * Improved performance of SceneObjectPart instantiation * ZeroMesher now drops SculptData bytes like Meshmerizer, to allow the texture data to be GCed * Improved typecasting speed in MySQLLegacyRegionData.BuildShape()
* Improved the instantiation of PrimitiveBaseShape
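For reference, a compact sketch of the RFC 2988-style timer handling referred to above: standard SRTT/RTTVAR smoothing, section 5.5 doubling on backoff, and the RFC's optional advice to clear SRTT and RTTVAR when backing off. RtoEstimator and its bounds are illustrative (the 3 to 10 second clamp echoes the clamping mentioned elsewhere in this log), not the actual LLUDPClient code:

```csharp
using System;

// Illustrative RFC 2988-style RTO state. UpdateRoundTripTime applies the
// standard SRTT/RTTVAR smoothing; BackoffRTO doubles the RTO after a resend
// (section 5.5) and, per the optional advice in the RFC, clears SRTT/RTTVAR
// so they are re-seeded by the next fresh sample.
public class RtoEstimator
{
    private const float Alpha = 0.125f; // 1/8
    private const float Beta = 0.25f;   // 1/4

    private float _srtt;     // smoothed round-trip time, ms
    private float _rttvar;   // round-trip time variation, ms
    public int RTO { get; private set; } = 3000; // initial RTO, ms

    public void UpdateRoundTripTime(float rttMs)
    {
        if (_srtt == 0f)
        {
            // First measurement after start or after a backoff cleared state.
            _srtt = rttMs;
            _rttvar = rttMs / 2f;
        }
        else
        {
            _rttvar = (1f - Beta) * _rttvar + Beta * Math.Abs(_srtt - rttMs);
            _srtt = (1f - Alpha) * _srtt + Alpha * rttMs;
        }

        // RTO = SRTT + max(G, 4 * RTTVAR); G here is an assumed 100 ms granularity.
        RTO = Clamp((int)(_srtt + Math.Max(100f, 4f * _rttvar)), 3000, 10000);
    }

    public void BackoffRTO()
    {
        // Optional advice in RFC 2988: drop the history so the next sample
        // re-seeds SRTT/RTTVAR instead of being averaged into stale values.
        _srtt = 0f;
        _rttvar = 0f;

        RTO = Math.Min(RTO * 2, 10000);
    }

    private static int Clamp(int value, int min, int max) =>
        value < min ? min : (value > max ? max : value);
}
```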
* Implemented section 5.5, exponential backoff of the RTO after a resend

setting throttles (normal)

empty instead of firing once per empty queue
* Change the OnQueueEmpty firing to use a minimum time until next fire instead of a sleep
* Set OutgoingPacket.TickCount = 0 earlier to avoid extra resends when things are running slowly (inside a profiler, for example)

upped it to 30ms
* Removed the unused PacketSent() function
* Switched UnackedPacketCollection from a SortedDictionary to a Dictionary now that the sorting is no longer needed. Big performance improvement for ResendUnacked()

if not it sleeps for a small amount of time. This throttles OnQueueEmpty calls where there is no callback or the callback is doing very little work
* Changed HandleQueueEmpty()'s Monitor.TryEnter() calls to locks. We want to take our time in this function and do all the work necessary, since returning too fast will induce a sleep anyway

per-client back to per-scene
* Testing a fix from Jim to make the cpu usage fix cleaner

can take several seconds, and was blocking up packet handling in the meantime
* Clamp retransmission timeout values between 3 and 10 seconds
* Log outgoing time for a packet right after it is sent instead of well before
* Loop through the entire UnackedPacketCollection when looking for expired packets

tiny amount of time spent in the locks turned into a lot of time when the rest of the LLUDP implementation went lockless
* Changed the timer tracking numbers for each client to not have "memory". It will no longer queue up calls to functions like ResendUnacked
* Reverted Jim's WaitHandle code. Although it was technically more correct, it exhibited the exact same behavior as the old code but spent more cycles. The 20ms has been replaced with the minimum amount of time before a token bucket could receive a drip, and an else { sleep(0); } was added to make sure the outgoing packet handler always yields at least a minimum amount
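A rough sketch of that loop shape: go straight back around (but still yield) when something was sent, otherwise sleep for roughly one token-bucket drip interval. The class and names are invented for illustration:

```csharp
using System.Threading;

// Illustrative outgoing-packet-handler loop: if any client had data to send
// this pass, immediately go around again but still yield with Sleep(0); if
// nothing was sent, sleep for about one token-bucket drip interval so the
// buckets have a chance to refill.
public class OutgoingLoop
{
    private const int DripIntervalMs = 100;   // illustrative drip period
    private volatile bool _running = true;

    public void Run()
    {
        while (_running)
        {
            bool packetSent = DequeueAndSendForAllClients();

            if (packetSent)
                Thread.Sleep(0);              // yield, but come straight back
            else
                Thread.Sleep(DripIntervalMs); // wait for the buckets to drip
        }
    }

    public void Stop() { _running = false; }

    private bool DequeueAndSendForAllClients()
    {
        // Walk every connected client's throttle queues here and return true
        // if at least one packet went out. Stubbed for the sketch.
        return false;
    }
}
```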
same category

handling with an interruptible wait handle

the case where no scripting engine is enabled
* Added TokenBucket.cs to OpenSim, with some fixes for setting a more accurate MaxBurst value and getting a more accurate Content value (by Drip()ing each get)
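A generic token bucket in the same spirit, dripping on every read of Content so the value stays current and capping the bucket at MaxBurst. This is an illustrative sketch, not the TokenBucket.cs that was added:

```csharp
using System;

// Generic token bucket sketch: DripRate tokens (bytes) per second accumulate
// up to MaxBurst; reading Content drips first, so callers always see an
// up-to-date token count.
public class SimpleTokenBucket
{
    public int DripRate { get; }    // bytes per second
    public int MaxBurst { get; }    // bucket capacity in bytes

    private int _content;
    private int _lastDrip = Environment.TickCount;
    private readonly object _sync = new object();

    public SimpleTokenBucket(int dripRate, int maxBurst)
    {
        DripRate = dripRate;
        MaxBurst = maxBurst;
    }

    public int Content
    {
        get { lock (_sync) { Drip(); return _content; } }
    }

    public bool RemoveTokens(int amount)
    {
        lock (_sync)
        {
            Drip();
            if (_content < amount) return false;
            _content -= amount;
            return true;
        }
    }

    private void Drip()
    {
        int now = Environment.TickCount;
        int elapsedMs = now - _lastDrip;
        if (elapsedMs <= 0) return;

        long earned = (long)DripRate * elapsedMs / 1000;
        if (earned > 0)
        {
            _content = (int)Math.Min(_content + earned, MaxBurst);
            _lastDrip = now;
        }
    }
}
```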
being done for AgentUpdate packets
* Start LLUDPClients unpaused (this variable is not being used yet)

* Changed the outgoing packet handler to use a real function instead of a closure and to track time on a per-client basis instead of a global basis

out exactly what is and isn't needed

This avoids .NET remoting and a managed->unmanaged->managed jump. Overall, a night and day performance difference
* Initialize the LLClientView prim full update queue to the number of prims in the scene for a big performance boost
* Reordered some comparisons on hot code paths for a minor speed boost
* Removed an unnecessary call to the expensive DateTime.Now function (if you *have* to get the current time as opposed to Environment.TickCount, always use DateTime.UtcNow)
* Don't fire the queue empty callback for the Resend category
* Run the outgoing packet handler thread loop for each client synchronously. It seems like more time was being spent doing the execution asynchronously, and it made deadlocks very difficult to track down
* Rewrote some expensive math in LandObject.cs
* Optimized EntityManager to only lock on operations that need locking, and use TryGetValue() where possible
* Only update the attachment database when an object is attached or detached
* Other small misc. performance improvements

increase throughput. Apologies to Jim for hacking on your code while it's only halfway done, I'll take responsibility for the manual merge
* Changed LLUDP to use its own MTU value of 1400 instead of the 1200 value pulled from the currently shipped libomv

cloud issues
* Changed the throttling logic to obey the requested client bandwidth limit but also share bandwidth between some of the categories to improve throughput on high prim or heavily trafficked regions

more tweaking in the future

* OnQueueEmpty is still called async, but will not be called for a given category if the previous callback for that category is still running. This is the most balanced behavior I could find, and seems to work well
* Added support for the old [ClientStack.LindenUDP] settings (including setting the receive buffer size) and added the new token bucket and global throttle settings
* Added the AssetLoaderEnabled config variable to optionally disable loading assets from XML every startup. This gives a dramatic improvement in startup times for those who don't need the functionality every startup
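A sketch of the per-category gating described in the first point above: skip the async OnQueueEmpty call while a previous callback for the same category is still running. QueueEmptyGate is invented for illustration (one instance per throttle category):

```csharp
using System.Threading;

// Illustrative per-category gate: OnQueueEmpty is kicked off asynchronously,
// but a new call for a category is skipped while the previous callback for
// that same category is still running.
public class QueueEmptyGate
{
    // 0 = idle, 1 = a callback for this category is in flight.
    private int _callbackRunning;

    public void FireOnQueueEmpty(int category)
    {
        // Only one in-flight callback per category.
        if (Interlocked.CompareExchange(ref _callbackRunning, 1, 0) != 0)
            return;

        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                HandleQueueEmpty(category);
            }
            finally
            {
                Interlocked.Exchange(ref _callbackRunning, 0);
            }
        });
    }

    private void HandleQueueEmpty(int category)
    {
        // Refill the category's queue here (e.g. dequeue prim updates).
    }
}
```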
* Crude prioritization hack

property and .Clear() method
* Changed the way the QueueEmpty callback is fired. It will be fired asynchronously as soon as an empty queue is detected (this can happen immediately following a dequeue), and will not be fired again until at least one packet is dequeued from that queue. This will give callbacks advanced notice of an empty queue and prevent callbacks from stacking up while the queue is empty
* Added LLUDPClient.IsConnected checks in several places to prevent unwanted network activity after a client disconnects
* Prevent LLClientView.Close() from being called twice every disconnect
* Removed the packet resend limit and improved the client timeout check
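A sketch of the edge-triggered QueueEmpty behavior described above: fire once when the queue is observed empty and re-arm only after another packet is dequeued. NotifyingQueue is an invented illustration, not the actual OpenSim queue class:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative edge-triggered empty notification: OnQueueEmpty fires
// asynchronously as soon as the queue is observed empty (possibly right
// after a dequeue) and will not fire again until at least one more packet
// has been dequeued, so callbacks cannot stack up while the queue stays empty.
public class NotifyingQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly object _sync = new object();
    private bool _armed = true;           // may the empty callback fire?

    public event Action OnQueueEmpty;

    public void Enqueue(T item)
    {
        lock (_sync)
            _queue.Enqueue(item);
    }

    public bool TryDequeue(out T item)
    {
        bool fireEmpty = false;
        bool dequeued;

        lock (_sync)
        {
            dequeued = _queue.Count > 0;
            item = dequeued ? _queue.Dequeue() : default(T);

            if (dequeued)
                _armed = true;            // a successful dequeue re-arms it

            if (_queue.Count == 0 && _armed)
            {
                _armed = false;           // fire once, then stay quiet
                fireEmpty = true;
            }
        }

        if (fireEmpty)
            ThreadPool.QueueUserWorkItem(_ => OnQueueEmpty?.Invoke());

        return dequeued;
    }
}
```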
performance by removing locks, and replace LLUDPClientCollection
* Removed the confusing (and LL-specific) shutdowncircuit parameter from IClientAPI.Close()
* Updated the LLUDP code to only use ClientManager instead of trying to synchronize ClientManager and m_clients
* Remove clients asynchronously since it is a very slow operation (including a 2000ms sleep)