Instead of processing all incoming attachment scene objects concurrently, process them consecutively to eliminate potential overload from this source.
This is a naive implementation because it does not currently account for slow foreign asset services.
Although it may take longer, this approach may also improve attachment visibility for HG avatars
since the scene object is now always added to the scene after receiving assets from the foreign service and not before.
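
In outline, the change amounts to the pattern in the sketch below (the class and method names are illustrative, not OpenSim's actual ones): a single worker thread drains a queue, so each attachment scene object is handled only after the previous one completes.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Illustrative sketch: one worker drains the queue, so incoming
    // attachment scene objects are processed one at a time.
    public class ConsecutiveAttachmentProcessor
    {
        private readonly BlockingCollection<Action> m_jobs = new BlockingCollection<Action>();

        public ConsecutiveAttachmentProcessor()
        {
            Thread worker = new Thread(() =>
            {
                foreach (Action job in m_jobs.GetConsumingEnumerable())
                    job(); // runs to completion before the next job starts
            });
            worker.IsBackground = true;
            worker.Start();
        }

        // Called once per incoming attachment scene object.
        public void Enqueue(Action processAttachment)
        {
            m_jobs.Add(processAttachment);
        }
    }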
|
problem diagnosis.
"show threadpool calls" now also returns named (labelled), anonymous (unlabelled) and total call stats.
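
The named/anonymous/total split is roughly the tally in this sketch (hypothetical names; the real figures come from OpenSim's threadpool instrumentation):

    using System.Collections.Concurrent;
    using System.Linq;

    public static class ThreadPoolCallStats
    {
        // Per-label call counts; calls made without a label are tallied
        // under a reserved anonymous key.
        private const string Anonymous = "(anonymous)";
        private static readonly ConcurrentDictionary<string, long> s_calls =
            new ConcurrentDictionary<string, long>();

        public static void RecordCall(string label)
        {
            string key = string.IsNullOrEmpty(label) ? Anonymous : label;
            s_calls.AddOrUpdate(key, 1, (k, v) => v + 1);
        }

        public static string Report()
        {
            long anonymous;
            s_calls.TryGetValue(Anonymous, out anonymous);
            long total = s_calls.Values.Sum();
            return string.Format("named: {0}, anonymous: {1}, total: {2}",
                total - anonymous, anonymous, total);
        }
    }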
|
smartthreadpool calls
|
server-side preferences due to no e-mail address being sent.
This is to avoid user confusion in the OSCC rehearsal, as users are often not aware that this fails because no e-mail address is set.
It may also be failing in the hypergrid case, though that may be a config issue.
This is meant as a temporary solution.
|
This resolves an issue with pCampbot where some bots would occasionally connect with the same UDP source port.
This sometimes led to console messages where bots would report receiving packets multiple times that weren't marked as resends.

DLLs built under Windows
|
At least on Mono 3.2.8 (but not under Windows), one can bind multiple UDP sockets to the same port by default.
Different simulators cannot demultiplex each other's messages, so a set of confusing non-obvious errors arise if this occurs.
This change prevents such multiple binding.
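
A minimal sketch of the approach (the exact socket option OpenSim uses may differ):

    using System.Net;
    using System.Net.Sockets;

    static Socket BindExclusiveUdp(int port)
    {
        Socket udpSocket = new Socket(
            AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

        // Turn off address reuse before binding, so a second simulator
        // binding the same port gets a SocketException instead of the two
        // silently stealing each other's packets.
        udpSocket.SetSocketOption(
            SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, false);

        udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        return udpSocket;
    }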
|
bots in motion.
Previously, adding this behaviour after physics (p) would leave the bot to drift off forever in its last movement direction.
|
This checks that all the wearable assets and any associated assets for a given logged-in avatar exist in the asset service.
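
The shape of the check, as a sketch (names are illustrative; assetExists stands in for a real asset service query, e.g. a non-null result from IAssetService.Get(id)):

    using System;
    using System.Collections.Generic;

    public static class AppearanceCheck
    {
        // Returns the IDs of wearable-related assets missing from the
        // asset service.
        public static List<string> FindMissingAssets(
            IEnumerable<string> wearableAssetIds, Func<string, bool> assetExists)
        {
            List<string> missing = new List<string>();

            foreach (string id in wearableAssetIds)
                if (!assetExists(id))
                    missing.Add(id);

            return missing;
        }
    }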
|
This shows summary wearables information (shape, hair, etc.) for all avatars in the scene or specific information about a given avatar's wearables.
Similar to the existing "attachments show" command.
|
Extends regression tests to test response of adaptive throttles to ack'ed and expired packets.
|
client throttles properly.
In "show throttles", also renames the 'total' column to 'actual' to reflect that it is not necessarily the throttle requested for/by the client.
Also fills out 'target' in non-adaptive mode with the actual throttle requested for/by the client.
|
settings.
As part of this, also refactors code to put all throttle asserts in a single regression test method.
|
client has made no AgentUpdate requests (as is the case with agents that have only ever been child agents) rather than throwing an exception.
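
The guard has this general shape (a sketch with hypothetical names, not the actual statistics code):

    public class AgentUpdateStats
    {
        private int m_agentUpdateCount;
        private double m_elapsedSeconds;

        public void RecordAgentUpdate(double elapsedSeconds)
        {
            m_agentUpdateCount++;
            m_elapsedSeconds = elapsedSeconds;
        }

        public double UpdatesPerSecond
        {
            get
            {
                // An agent that has only ever been a child agent never sends
                // AgentUpdate, so return a default rather than throwing.
                if (m_agentUpdateCount == 0 || m_elapsedSeconds <= 0)
                    return 0;

                return m_agentUpdateCount / m_elapsedSeconds;
            }
        }
    }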
|
Scene.cs (almost five years ago!)
|
client being added to the manager without IClientAPI.SceneAgent being set.
This is done by adjusting the order of code so that SceneAgent will always be set before adding the client.
Various parts of the code (rightly) assume that a client registered to the manager will always have a SceneAgent set no matter what.
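
The reordering is along these lines (a sketch; CreateScenePresence and m_clientManager are stand-ins for the real scene code):

    public void AddNewAgent(IClientAPI client, PresenceType type)
    {
        ScenePresence sp = CreateScenePresence(client, type);

        // Set SceneAgent before registering the client, so anything that
        // sees the client via the manager can rely on it being non-null.
        client.SceneAgent = sp;

        m_clientManager.Add(client);
    }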
|
LLUDP client stack rather than queueing internally within LLClientView.
When an HG avatar enters a scene, processing of its entity updates is delayed. This could be crowding out by other updates or something else.
This delay in one's own avatar movement updates results in movement lag experienced on the client. Avoiding the internal LLClientView queues for these packets appears to resolve the issue.
This appears most noticeably for avatars with attachments, though it has sometimes also been seen on those without. It hasn't been observed for non-HG avatars in general.
Will be investigating exactly what the problem is, at which point there will be a more permanent solution.
|
This allows one to set the requested throttle (which normally comes from the client) as opposed to the max.
|
On server, scene-throttle-max becomes max-scene-throttle and likewise max-new-client-throttle
On clients, throttle-max becomes max
|
client throttle to be set separately from existing clients.
"debug lludp throttles get/set throttle-max" now only gets and sets current max client throttles
|
to mirror "debug lludp set"
Information is also available in "show server throttles", but that's more for non-debug info than for getting and setting parameters on the fly for debug purposes.
|
streaming XML basis rather than loading it all into memory via XmlDocument.
This is because objects with many parts can have a lot of XML to load into memory, which has been seen to have a noticeable performance impact, whereas streaming has been seen to reduce that impact in normal serialization.
The implementation is messy, but I couldn't see a better way of doing it when you can't assume that you know the exact structure of the input XML.
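
The contrast with XmlDocument, in sketch form: XmlReader walks the input forward-only without materializing the whole tree, and we only react to elements of interest as they stream past (the element name here is illustrative):

    using System.IO;
    using System.Xml;

    public static int CountParts(Stream sceneObjectXml)
    {
        int parts = 0;

        using (XmlReader reader = XmlReader.Create(sceneObjectXml))
        {
            // No assumption about the exact structure of the input XML.
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element
                        && reader.Name == "SceneObjectPart")
                    parts++;
            }
        }

        return parts;
    }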
|
HGAssetMapper.Post() object asset rewriting,
|
instead of letting it terminate the simulator...
|
Doing this required GetMesh to be converted to a BaseStreamHandler.
Unlike the GetTexture connector, there is no redirect URL functionality yet (this wasn't present in the first place).
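
In outline, such a handler looks something like this sketch (the signature follows OpenSim's BaseStreamHandler as I understand it; the body is illustrative only):

    using System.IO;
    using OpenSim.Framework.Servers.HttpServer;

    public class GetMeshStreamHandler : BaseStreamHandler
    {
        public GetMeshStreamHandler(string path) : base("GET", path) {}

        protected override byte[] ProcessRequest(
            string path, Stream request,
            IOSHttpRequest httpRequest, IOSHttpResponse httpResponse)
        {
            // Illustrative only: parse the mesh asset ID from the query
            // string and return the mesh bytes (fetch omitted here).
            return new byte[0];
        }
    }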
|
entering the scene that isn't initially logging on. This will execute tasks consecutively rather than concurrently.
This has two aims:
1) Reduce initial teleport failures when a foreign Hypergrid user enters a region, by not holding up the teleport for attachment rez (this can be particularly costly when HG gets all assets in the object graph).
2) Reduce server load that may impact other simulator activities.
This complements existing JobEngine options that perform initial login attachment rez and appearance send in consecutive tasks.
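
The queueing is along these lines (a sketch assuming a JobEngine-style QueueJob(name, action) API; the surrounding names are stand-ins):

    public void RezAttachmentsConsecutively(IScenePresence sp)
    {
        if (m_jobEngine != null && m_jobEngine.IsRunning)
        {
            // Jobs run one after another on the engine's single worker, so
            // attachment rez neither floods the threadpool nor holds up the
            // teleport itself.
            m_jobEngine.QueueJob("RezAttachments " + sp.Name,
                () => RezAttachments(sp));
        }
        else
        {
            RezAttachments(sp);
        }
    }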
|
the user that's the problem rather than simply exiting silently.
Also exit with Environment.Exit(), not by aborting the thread.
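
In sketch form:

    using System;

    static void ExitWithError(string message)
    {
        // Tell the user what the problem is rather than dying silently...
        Console.Error.WriteLine("FATAL: " + message);

        // ...then terminate with a non-zero status via Environment.Exit(),
        // not by aborting the current thread.
        Environment.Exit(1);
    }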
|
instantiated directly for potentially handling some capabilities directly in services with HG active
|
throttles would cause client throttles to be lower than expected when total requests exceeded the scene limit.
This was because specifying a max client throttle would always request the max from the parent server throttle, no matter the actual total requests on the client throttle.
This would lead to a lower server multiplier than expected.
This change also adds a 'target' column to the "show throttles" output that shows the target rate (as set by the client) if adaptive throttles are active.
This commit also re-adds the functionality lost in recent 5c1a1458 to set a max client throttle when adaptive is active.
This commit also adds TestClientThrottlePerClientAndRegionLimited and TestClientThrottleAdaptiveNoLimit regression tests
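
For orientation, these throttles are built on hierarchical token buckets; a much-simplified, non-hierarchical sketch of the underlying idea:

    public class SimpleTokenBucket
    {
        private readonly long m_dripRate; // tokens (bytes) added per second
        private long m_tokens;
        private int m_lastDripMS = System.Environment.TickCount;

        public SimpleTokenBucket(long dripRate)
        {
            m_dripRate = dripRate;
        }

        // Try to take tokens for an outbound packet; false means the
        // throttle does not currently allow the send.
        public bool RemoveTokens(long amount)
        {
            Drip();

            if (m_tokens < amount)
                return false;

            m_tokens -= amount;
            return true;
        }

        private void Drip()
        {
            int now = System.Environment.TickCount;
            m_tokens += m_dripRate * (now - m_lastDripMS) / 1000;

            // Cap the burst at one second's worth of tokens.
            if (m_tokens > m_dripRate)
                m_tokens = m_dripRate;

            m_lastDripMS = now;
        }
    }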
|
make code analysis easier. No functional change.
|
This only had one child, which is the 'adaptive' token bucket.
So from testing and current analysis, we can use that bucket directly, which simplifies the code.
|
remove_me parameter (which looks like it might still be potentially useful for logging)
|
Renames to TestSingleClientThrottleRegionLimited()
|
behaviour of throttles where a region-wide total outbound limit is in place.
|
a max client total limit is enforced server-side
|
package rather than some in OpenSim.Tests.Common.Mock
The separate mock package was not useful and was just another using line to always add.
|
OpenSim.Tests.Common.ClientStackHelpers