Adds a DoubleLocklessQueue and uses it for the outgoing buckets. Adds a flag value to the throttle type (again) because, although it's hacky, it's the best of a bad bunch for getting the message through the UDP stack to where it's needed.
|
This reverts commit 4d92c6b39f3ebb7a27517493b66d097d9d9d23dd.
|
removed by Intel folks (?) (should it be used or removed??)
|
work.
|
* Added DoS protection against bad clients by limiting the max cache items to a sane maximum.
* Prevents index errors, and numerous potential loops from running amok, if the client purposely provides bad cache info.
* If the XBakes service wasn't running, the SetAvatarAppearance routine would crash when contacting the XBakes service, even though the call was inside a try/catch for the appropriate error type. It only error-handles properly with the base type Exception :( (commented on that in the code because it's unusual).
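A minimal sketch of the oddity described in the last point; the wrapper, the WebException choice, and the log text are illustrative assumptions:

```csharp
using System;
using System.Net;

public static class XBakesCallSketch
{
    public static void Post(Action contactXBakes)
    {
        try
        {
            contactXBakes();
        }
        catch (WebException e)
        {
            // The handler you'd expect to fire when the service is down...
            Console.WriteLine("[AVFACTORY]: XBakes POST failed: " + e.Message);
        }
        catch (Exception e)
        {
            // ...but in practice only the base Exception type catches the
            // failure, hence this unusual extra handler.
            Console.WriteLine("[AVFACTORY]: XBakes POST failed: " + e.Message);
        }
    }
}
```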
|
XBakes module and service for full functionality. The previous no-cache functionality works without the service and module. In some ways I would have been happier not putting an AssetBase in WearableCacheItem, but it turns out that was probably unavoidable. No additional locks, yay.
|
Cached Bakes.
|
more messy)
|
cache without actually loading it. Make limited use of it in the AvatarFactory textures check, and also in LLClientView HandleAgentTextureCached, which should now work. Other asset cache modules will return false for now, so they are broken. Baked textures logic is still unchanged. *UNTESTED*
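A minimal sketch of the presence check, assuming a file-backed cache; the interface and class names are illustrative:

```csharp
using System.IO;

// Illustrative names: a presence test on the asset cache that avoids
// loading or deserializing the asset itself.
public interface ICheckableAssetCache
{
    bool Check(string id);
}

public class FileBackedCacheSketch : ICheckableAssetCache
{
    private readonly string m_cacheDir;

    public FileBackedCacheSketch(string cacheDir)
    {
        m_cacheDir = cacheDir;
    }

    // A file-backed cache can answer cheaply by testing for the file.
    // A cache that can't answer cheaply just returns false, which is
    // why the other modules are described as broken for now.
    public bool Check(string id)
    {
        return File.Exists(Path.Combine(m_cacheDir, id));
    }
}
```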
|
they don't do what I was looking for.
|
since at least for now it seems good enough.
|
situations I can't yet demonstrate that it's better than just letting the client request what it needs, in terms of responsiveness of the mesh in the scene.
|
* Last step is to flip the throttle distribution.
|
delta over time.
The chief motivation for this is to be able to tell whether there's any impact on incoming packet processing from enabling extra packet pooling.
|
master.
This reverts commit dfac269032300872c4d0dc507f4f9062d102b0f4, reversing
changes made to 619c39e5144f15aca129d6d999bcc5c34133ee64.
|
handling thread.
This prevents a slow grid information network call from holding up the main packet handling thread.
There's no obvious race condition reason for not doing this asynchronously.
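A sketch of the pattern with hypothetical handler names, using the stock .NET thread pool to stand in for whatever async mechanism the code actually uses:

```csharp
using System.Threading;

public class PacketHandlerSketch
{
    // Hypothetical names: the pattern is just handing the slow work to a
    // worker so the inbound UDP loop keeps draining packets.
    public void HandleRequest(object request)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // The slow grid-information lookup now runs off the main
            // packet handling thread; nothing here touches shared
            // mutable state, so there's no obvious race in doing so.
            LookupAndReply(request);
        });
    }

    private void LookupAndReply(object request)
    {
        /* ... contact the grid service and send the reply ... */
    }
}
```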
|
UDP packet handling thread.
There's no obvious race condition reason for doing this on the main packet handling thread.
|
LLClientView directly.
This releases the inbound packet handling thread marginally more quickly and is more consistent with the other async packet handling.
|
lifted up into LLUDPServer and be distinguished by scene name.
|
Also puts some packet processing counts in a container named after the scene so that stats can be collected from more than one scene.
|
grepping for remaining uses
|
* This still has the image throttler in it, as is, so it's not suitable for live use yet. The throttler keeps track of the task throttle but doesn't balance the UDP throttle yet.
|
to be OK with me specifying allowing one oversized image per 70,000 b/sec, with at least one. Try it out: start with a low bandwidth setting, then set your bandwidth setting to middle/high and see the difference.
Tested with two clients on a region with 1800 textures all visible at once.
|
PollServiceTextureEventArgs. Each poll service having its own throttle member is more consistent with the model than the region module keeping track of all of them globally, and better for locking too. The poll services object is not static, so as to handle multiple nearby regions on the same simulator.
The next step is hooking it up to HasEvents.
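A minimal token-bucket sketch of such a per-poll-service throttle member; the class name, rates, and units are illustrative assumptions rather than the actual implementation:

```csharp
using System;

// Illustrative token-bucket throttle; the real member on
// PollServiceTextureEventArgs may look quite different.
public class PollServiceThrottleSketch
{
    private readonly object m_sync = new object();
    private readonly int m_bytesPerSecond;
    private double m_tokens;
    private int m_lastTick = Environment.TickCount;

    public PollServiceThrottleSketch(int bytesPerSecond)
    {
        m_bytesPerSecond = bytesPerSecond;
        m_tokens = bytesPerSecond;
    }

    // Each poll service instance owns one of these, so locking stays
    // local to the instance instead of a module-wide dictionary.
    public bool TrySend(int bytes)
    {
        lock (m_sync)
        {
            // Refill tokens for the elapsed time, capped at one second's
            // worth, then spend them if the send fits.
            int now = Environment.TickCount;
            m_tokens = Math.Min(m_bytesPerSecond,
                m_tokens + (now - m_lastTick) / 1000.0 * m_bytesPerSecond);
            m_lastTick = now;

            if (m_tokens < bytes)
                return false;

            m_tokens -= bytes;
            return true;
        }
    }
}
```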
|
EventManager, so that modules can know when throttles are updated. The event contains no client-specific data, to preserve the possibility of 'multiple clients', and you must still call ControllingClient.GetThrottlesPacked(f) to see what the throttles actually are once the event fires. Hook EventManager.OnUpdateThrottle to GetTextureModule.
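A sketch of how a module might hook this; the handler's delegate signature is an assumption (only the event name and GetThrottlesPacked come from the message above), and the OpenSim types (Scene, ScenePresence, UUID) are referenced loosely rather than stubbed out:

```csharp
public class GetTextureModuleSketch
{
    private Scene m_scene;

    public void AddRegion(Scene scene)
    {
        m_scene = scene;
        scene.EventManager.OnUpdateThrottle += HandleUpdateThrottle;
    }

    // The event carries no throttle values, so we re-read them from the
    // client once it fires.
    private void HandleUpdateThrottle(UUID agentId)
    {
        ScenePresence sp = m_scene.GetScenePresence(agentId);
        if (sp == null)
            return;

        byte[] throttles = sp.ControllingClient.GetThrottlesPacked(1.0f);
        // ... retune this module's texture poll throttle from 'throttles' ...
    }
}
```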
|
rather than synchronously.
This is to avoid the entire scene loop being held up when the group service is slow to respond.
There's no obvious reason for these queries to be sync rather than async.
|
Viewer 3 will discard such a message if the chat message owner does not match the avatar.
We were filling the ownerID with the primID, so this never matched, hence viewer 3 did not see any script error messages.
This commit fills the ownerID in with the prim ownerID so the script owner will receive script error messages.
This does not affect viewer 1 and associated viewers which continue to process script errors as normal.
|
running via the "debug lludp pool <on|off>" console command, for debug purposes.
This does not currently apply to the higher-level LLUDP packet pool.
|
they are enabled. Add count stats for existing LLUDP pool.
This introduces a pull stat type in addition to the push stat type.
A pull stat takes a method on construction which knows how to update the stat on request.
In this way, special interfaces for pull stat collection are not necessary.
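A minimal sketch of the pull-stat idea; class and member names are illustrative rather than the actual Stat API:

```csharp
using System;

// Illustrative names, not the actual Stat API.
public class PullStatSketch
{
    private readonly Func<double> m_pull;

    public string Name { get; }

    // The updater is supplied at construction; the measured object
    // needs no special stats interface.
    public PullStatSketch(string name, Func<double> pull)
    {
        Name = name;
        m_pull = pull;
    }

    // A push stat is written to by the measured code; a pull stat is
    // read on demand by the collector via the supplied method.
    public double Value => m_pull();
}

// e.g. new PullStatSketch("UDPBuffersPooled", () => pool.Count);
```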
|
commit (1de80c)
|
which there are 10 a second) rather than constructing a new one every time.
We can do this because AgentUpdate packets are handled synchronously.
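A sketch of the reuse pattern; all names are hypothetical and the packet type is stubbed, but the safety argument is the one given above:

```csharp
// Stub standing in for OpenMetaverse.Packets.AgentUpdatePacket.
public class AgentUpdatePacket { /* fields elided */ }

public class AgentUpdateReuseSketch
{
    // One long-lived instance instead of ~10 allocations/sec per client.
    private readonly AgentUpdatePacket m_reusable = new AgentUpdatePacket();

    public void HandleIncoming(byte[] buffer, int length)
    {
        // Overwrite the single instance in place. Safe only because
        // AgentUpdate packets are handled synchronously: processing
        // finishes before the next packet can reuse the object.
        AgentUpdatePacket packet = m_reusable;
        DecodeInto(packet, buffer, length); // hypothetical decode helper
        Process(packet);
    }

    private void DecodeInto(AgentUpdatePacket p, byte[] buffer, int length) { /* ... */ }
    private void Process(AgentUpdatePacket p) { /* ... */ }
}
```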
|
one we pool at the moment, rather than attempting to return all incoming packets.
|
churn
|
ignoring it.
|
UDP data.
Even when an avatar is standing still, the viewer sends a constant stream of AgentUpdate packets, and the server creates new UDPPacketBuffer objects to handle each of them.
This option pools those objects, which reduces memory churn.
Currently off by default. Works, but the scope can be expanded.
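A minimal sketch of such an object pool; the class is illustrative and the real pool's integration with the UDP receive path is elided:

```csharp
using System.Collections.Generic;

// Stub standing in for OpenMetaverse's UDPPacketBuffer.
public class UDPPacketBuffer { /* data buffer elided */ }

public class UDPBufferPoolSketch
{
    private readonly Stack<UDPPacketBuffer> m_pool = new Stack<UDPPacketBuffer>();
    private readonly int m_maxSize;

    public UDPBufferPoolSketch(int maxSize)
    {
        m_maxSize = maxSize;
    }

    public UDPPacketBuffer Get()
    {
        lock (m_pool)
        {
            if (m_pool.Count > 0)
                return m_pool.Pop();
        }
        return new UDPPacketBuffer(); // pool empty: allocate as before
    }

    // Called once an incoming packet has been fully processed, instead
    // of leaving the buffer for the GC: this is the churn being avoided.
    public void Return(UDPPacketBuffer buffer)
    {
        lock (m_pool)
        {
            if (m_pool.Count < m_maxSize)
                m_pool.Push(buffer);
        }
    }
}
```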
|