Commit message

reporting login issues.

it is needed again. Mantis #5365

This reverts commit 585473aade100c3ffeef27e0c8e6b6c8c09d0109.

race condition happen, and got very similar results to those described in Mantis #5365 -- no prims/avatars sent back.

"emergency-monitoring on/off"

the server settings.
This is in a very crude state, currently.
The LindenUDPModule was renamed LindenUDPInfoModule and moved to OptionalModules.
OptionalModules was given a direct reference to OpenSim.Region.ClientStack.LindenUDP so that it can inspect specific LindenUDP settings without having to generalize those to all client views (some of which may have no concept of the settings involved).
This might be less messy if OpenSim.Region.ClientStack.LindenUDP were a region module instead, like MXP, IRC and NPC.
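
As a point of reference for that last remark, a minimal sketch of what such an optional, non-shared region module could look like, assuming OpenSim's INonSharedRegionModule interface; the class body is illustrative, not the actual LindenUDPInfoModule:

    using System;
    using Nini.Config;
    using OpenSim.Region.Framework.Interfaces;
    using OpenSim.Region.Framework.Scenes;

    // Illustrative sketch only; not the real LindenUDPInfoModule.
    public class LindenUDPInfoModuleSketch : INonSharedRegionModule
    {
        public string Name { get { return "LindenUDPInfoModuleSketch"; } }

        // Returning null tells the module loader nothing can replace this module.
        public Type ReplaceableInterface { get { return null; } }

        public void Initialise(IConfigSource source) {}

        public void AddRegion(Scene scene)
        {
            // Console commands for inspecting LindenUDP settings would be
            // registered here, with the scene in hand.
        }

        public void RegionLoaded(Scene scene) {}
        public void RemoveRegion(Scene scene) {}
        public void Close() {}
    }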

Didn't touch the appearance-related stuff.

Bytes were being wrongly added again on a resend.

some point.

If an LL 1.23.5 client (and possibly earlier and later) receives an object update after a kill object packet, it leaves the deleted prim in the scene until client relog.
This is possible in LLUDPServer if an object update packet is queued but a kill packet is sent immediately.
Beyond invasive tracking of kill sending, the most expedient solution is to always queue kills, so that they always arrive after updates.
In tests, this doesn't appear to affect performance.
There is probably still an issue present where an update packet might not be acked and then resent after the kill packet.
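
A minimal sketch of the queue-the-kill approach, assuming LLUDP conventions (PacketPool, OutPacket, ThrottleOutPacketType); the exact signatures here are illustrative:

    // Route kills through the same throttled queue as object updates
    // instead of sending them immediately, so ordering is preserved.
    public void SendKillObject(uint localID)
    {
        KillObjectPacket kill =
            (KillObjectPacket)PacketPool.Instance.GetPacket(PacketType.KillObject);
        kill.ObjectData = new KillObjectPacket.ObjectDataBlock[1];
        kill.ObjectData[0] = new KillObjectPacket.ObjectDataBlock();
        kill.ObjectData[0].ID = localID;
        kill.Header.Reliable = true;

        // Queueing on the task throttle category means the kill cannot
        // overtake updates already waiting in that queue.
        OutPacket(kill, ThrottleOutPacketType.Task);
    }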

Conflicts:
	OpenSim/Region/Framework/Scenes/Scene.cs

Conflicts:
	OpenSim/Region/CoreModules/Framework/EntityTransfer/EntityTransferModule.cs
	OpenSim/Services/Connectors/Simulation/SimulationServiceConnector.cs

Setting this to true avoids a 500ms or so client freeze when the LLUDP server thread is taken up with processing a UseCircuitCode packet synchronously.
Extensive testing on Wright Plaza appeared to show no bad effects and this seems to reduce login lag considerably.
Of course, a lot of login lag is still coming from other sources.
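
Judging from the entry at the bottom of this log, the flag in question is most likely LLUDP's async_packet_handling option; assuming so, the setting would look like this in OpenSim.ini:

    [ClientStack.LindenUDP]
    ; Process inbound packets (notably UseCircuitCode) on a worker thread
    ; instead of blocking the LLUDP receive thread.
    async_packet_handling = true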

for diagnostics

This reverts commit 21187f459ea2ae590dda4249fa15ebf116d04fe0, reversing
changes made to 8f34e46d7449be1c29419a232a8f7f1e5918f03c.

Object updates are sent on the task queue. It's possible for an object update to be placed on the client queue before a kill packet comes along.
The kill packet would then be placed on the state queue and possibly get sent before the update.
If the update gets sent afterwards, the client gets undeletable, ownerless objects until relog.
Placing the kills in the task queue should mean that they are received after updates. The kill record prevents subsequent updates getting on the queue.
Comments state that updates are sent via the state queue, but this isn't true; if it were, this problem might not exist.
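
A minimal sketch of such a kill record, assuming a per-client set of killed local IDs; all names here are illustrative:

    using System.Collections.Generic;

    // Illustrative kill record: remember killed local IDs so that no
    // later update for the same object can be queued.
    class KillRecordSketch
    {
        private readonly HashSet<uint> m_killRecord = new HashSet<uint>();

        public void RecordKill(uint localID)
        {
            // Record the kill before queueing the KillObject packet so a
            // racing update cannot slip in between.
            lock (m_killRecord)
                m_killRecord.Add(localID);
        }

        public bool ShouldQueueUpdate(uint localID)
        {
            // Skip updates for already-killed objects; the viewer would
            // otherwise show an undeletable, ownerless prim until relog.
            lock (m_killRecord)
                return !m_killRecord.Contains(localID);
        }
    }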

informative

DisableFacelights option to OpenSim.ini to finally kill those immersion-breaking, silly vanity lights that destroy nighttime RP. Girls, you look just fine without them. Guys, you too. Thank you. Melanie has left the building.
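
Assuming the option landed under the LLUDP section (the placement is an assumption), usage would look like this in OpenSim.ini:

    [ClientStack.LindenUDP]
    ; Strip light properties from attachments ("facelights") before
    ; object updates are sent to viewers.
    DisableFacelights = true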

original behavior of avatar update sending and has a simplified set of IClientAPI methods for sending avatar/prim updates

I'm seeing the viewer ignore or fail to parse ACKs appended to our zerocoded packets. This should cut down on viewer->sim resend traffic.
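
A sketch of the corresponding guard, assuming libopenmetaverse's Packet type and a hypothetical queue of pending ACKs:

    using System;
    using System.Collections.Generic;
    using OpenMetaverse.Packets;

    static class AckAppendSketch
    {
        // Only piggyback ACKs on packets that are not zerocoded, since the
        // viewer appears to mis-parse ACKs appended to zerocoded packets.
        public static void AppendAcks(Packet packet, Queue<uint> pendingAcks)
        {
            if (packet.Header.Zerocoded)
                return;

            int ackCount = Math.Min(pendingAcks.Count, 255); // the count field is one byte
            if (ackCount == 0)
                return;

            packet.Header.AckList = new uint[ackCount];
            for (int i = 0; i < ackCount; i++)
                packet.Header.AckList[i] = pendingAcks.Dequeue();
            packet.Header.AppendedAcks = true;
        }
    }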

printing the hex dump

* Handle logout properly. This needed an addition to IClientAPI, because of how the logout packet is currently being handled -- the agent is being removed from the scene before the different event handlers are executed, which is broken.

packet to be processed asynchronously or not.
* Make several packets not asynchronous (such as AgentUpdate). In theory, all fast-returning packet handling methods should not be asynchronous. Ones that wait on an external resource or a long-held lock should be asynchronous.
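
Assuming handler registration takes a per-packet async flag (the exact signature is illustrative), the rule of thumb above reads:

    // Fast-returning handlers run inline on the UDP thread; handlers that
    // may block on an external resource or a long-held lock run async.
    AddLocalPacketHandler(PacketType.AgentUpdate, HandleAgentUpdate, false);
    AddLocalPacketHandler(PacketType.AgentAnimation, HandleAgentAnimation, false);
    AddLocalPacketHandler(PacketType.TransferRequest, HandleTransferRequest, true);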

actual packet size only for oversized packets.

inventory packets don't make us barf

packet is a mess and shouldn't be used at all (in favor of the event queue message)
* Clean up the way we send AvatarGroupsReply packets, including clamping the group name and group title
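
Clamping here presumably means truncating before packing into the packet's length-limited string fields; a hypothetical helper, assuming libopenmetaverse's Utils.StringToBytes:

    using OpenMetaverse;

    static class GroupsReplySketch
    {
        // Truncate a string before converting it for a length-limited
        // LLUDP field such as the group name or title.
        public static byte[] ClampStringToBytes(string s, int maxLength)
        {
            if (s == null)
                s = string.Empty;
            if (s.Length > maxLength)
                s = s.Substring(0, maxLength);
            return Utils.StringToBytes(s); // appends the trailing NUL byte
        }
    }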

category to task
* Fixing a bug where the max burst rate for the state category was being set as unlimited, causing connections to child agents to saturate bandwidth
* Upped the example default drip rates to 1000 bytes/sec, the minimum granularity for the token buckets
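
A minimal token-bucket sketch showing how the drip rate and max burst interact; field and method names are illustrative, not the LLUDP implementation:

    using System;

    class TokenBucketSketch
    {
        private readonly int m_dripRate;   // bytes added per second, e.g. 1000
        private readonly int m_maxBurst;   // bucket capacity in bytes
        private int m_content;             // tokens currently available
        private DateTime m_lastDrip = DateTime.UtcNow;

        public TokenBucketSketch(int dripRate, int maxBurst)
        {
            m_dripRate = dripRate;
            m_maxBurst = maxBurst;
        }

        public bool RemoveTokens(int amount)
        {
            Drip();
            if (m_content < amount)
                return false;      // not enough budget; caller keeps the packet queued
            m_content -= amount;
            return true;
        }

        private void Drip()
        {
            DateTime now = DateTime.UtcNow;
            int add = (int)((now - m_lastDrip).TotalSeconds * m_dripRate);
            if (add > 0)
            {
                // Clamping to m_maxBurst is the fix alluded to above: an
                // unlimited burst lets an idle connection bank enough
                // tokens to saturate bandwidth later.
                m_content = Math.Min(m_content + add, m_maxBurst);
                m_lastDrip = now;
            }
        }
    }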

* Prints a warning for any future packet splitting failures

* Misc. cleanup in ScenePresence.HandleAgentUpdate()

based clients can use a UDP server inherited from LLUDPServer.

always leave a worker thread available for other tasks

inside Scene as an implementation detail. This will reduce programming error and make it easier to refactor the avatar vs client vs presence mess later on.

or async to use Scene.ForEachClient() instead of referencing ClientManager directly
* Added a new [Startup] config option called use_async_when_possible to signal how to run operations that could be either sync or async
* Changed Scene.ForEachClient to respect use_async_when_possible
* Fixing a potential deadlock in Parallel.ForEach by locking on a temporary object instead of the enumerator (which may be shared across multiple invocations of ForEach). Thank you, diva.
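
The new option as it would appear in OpenSim.ini (the name is taken from the message above; the comment is illustrative):

    [Startup]
    ; Prefer the asynchronous path for operations that can run either
    ; sync or async, such as Scene.ForEachClient.
    use_async_when_possible = true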

on the async_packet_handling config option, and added a debug log message when a UseCircuitCode packet is handled