- ... (General, not physics). Makes flying not feel as stiff.
- ... other than the object currently sat on
- ... work.
- ... interference between incoming packets.
  On Windows, concurrent multi-threaded processing of inbound UDP somehow allows different data input processing to interfere with each other.
  Possibly the endpoint reference is being switched, though I don't yet know the mechanism. Not seen on Mono.
  Also resolvable by setting RecyclePackets = false or RecycleBaseUDPPackets = false in [PacketPool],
  or async_packet_handling = false in [ClientStack.LindenUDP].
  For now, will simply disable this particular pooling, though will revisit this issue.
  In response to http://opensimulator.org/mantis/view.php?id=6468
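As a rough illustration of the workarounds named above (the section and option names are taken from the commit text; exact placement and defaults may vary by OpenSim version), the overrides would look something like this in the simulator ini configuration:

    [PacketPool]
        ; stop recycling packet objects and base UDP packets
        RecyclePackets = false
        RecycleBaseUDPPackets = false

    [ClientStack.LindenUDP]
        ; handle inbound packets on the main thread rather than asynchronously
        async_packet_handling = false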
- ... process a packet
- ... automatically turns off any logging enabled between tests
- ... delta over time.
  The chief motivation is to be able to tell whether enabling extra packet pooling has any impact on incoming packet processing.
- ... handling thread.
  This prevents a slow grid information network call from holding up the main packet handling thread.
  There's no obvious race condition reason for not doing this asynchronously.
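Several commits in this range move individual packet handlers off the main inbound UDP thread. A minimal C# sketch of the pattern, using hypothetical names rather than the actual LLClientView code: the handler queues the slow lookup onto the thread pool and returns at once, so the packet handling thread is released immediately.

    using System.Threading;

    // Sketch only: a stand-in for a packet handler that would otherwise block
    // the inbound UDP thread on a grid information call.
    class AsyncPacketHandlerSketch
    {
        // Called on the main inbound packet handling thread.
        public void HandleNameRequest(object requestPacket)
        {
            // Queue the potentially slow lookup onto a worker thread.
            ThreadPool.QueueUserWorkItem(_ => ResolveAndReply(requestPacket));
        }

        void ResolveAndReply(object requestPacket)
        {
            // ...slow grid information call here, then send the reply to the viewer...
        }
    }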
- ... udp packet handling thread.
  There's no obvious race condition reason for doing this on the main packet handling thread.
- ... LLClientView directly.
  This releases the inbound packet handling thread marginally quicker and is more consistent with the other async packet handling.
- ... lifted up into LLUDPServer and be distinguished by scene name
- ... Also puts some packet processing counts in a container named after the scene so that stats can be collected from more than one scene.
- ... grepping for remaining uses
- ... rather than synchronously.
  This is to avoid the entire scene loop being held up when the group service is slow to respond.
  There's no obvious reason for these queries to be sync rather than async.
- ... Viewer 3 will discard such a message if the chat message owner does not match the avatar.
  We were filling the ownerID with the primID, so this never matched and viewer 3 did not see any script error messages.
  This commit fills in the ownerID with the prim's ownerID, so the script owner will receive script error messages.
  This does not affect viewer 1 and associated viewers, which continue to process script errors as normal.
- ... running via the "debug lludp pool <on|off>" console command, for debug purposes.
  This does not currently apply to the higher-level LLUDP packet pool.
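For example, pooling of incoming packet buffers could be toggled at runtime from the region console (the prompt and region name shown are illustrative):

    Region (MyRegion) # debug lludp pool off
    Region (MyRegion) # debug lludp pool on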
- ... they are enabled. Add count stats for existing LLUDP pool.
  This introduces a pull stat type in addition to the push stat type.
  A pull stat takes a method on construction which knows how to update the stat on request.
  In this way, special interfaces for pull stat collection are not necessary.
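A minimal sketch of the pull-stat idea described above, using hypothetical types rather than the actual OpenSim.Framework.Monitoring classes: the stat is handed an update callback at construction and invokes it whenever the value is requested, so no extra collection interface is needed.

    using System;

    // Sketch: a stat whose value is pulled on demand via a callback supplied at construction.
    class PullStatSketch
    {
        readonly Func<double> _pull;   // knows how to compute the current value
        public string Name { get; }

        public PullStatSketch(string name, Func<double> pull)
        {
            Name = name;
            _pull = pull;
        }

        // Each read asks the callback for an up-to-date value.
        public double Value => _pull();
    }

    // Usage: a pool might register a stat that reports its current size when asked, e.g.
    // var stat = new PullStatSketch("IncomingPacketPoolCount", () => pool.Count);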
- ... commit (1de80c)
- ... which there are 10 a second) rather than constructing a new one every time.
  We can do this because AgentUpdate packets are handled synchronously.
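A rough sketch of that reuse pattern (hypothetical names and fields, not the actual OpenSim types): because the handler runs synchronously, a single scratch args object can be overwritten for each packet instead of allocating a fresh one per AgentUpdate.

    // Sketch: reuse one mutable args object across synchronous AgentUpdate handling.
    class AgentUpdateArgsSketch
    {
        public uint ControlFlags;
        public float DrawDistance;   // illustrative fields only
    }

    class AgentUpdateHandlerSketch
    {
        // Safe to reuse only because this handler never runs concurrently for the same client.
        readonly AgentUpdateArgsSketch _scratchArgs = new AgentUpdateArgsSketch();

        public void HandleAgentUpdate(uint controlFlags, float drawDistance)
        {
            _scratchArgs.ControlFlags = controlFlags;
            _scratchArgs.DrawDistance = drawDistance;
            // ...raise the update event with _scratchArgs instead of a new object...
        }
    }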
- ... one we pool at the moment, rather than attempting to return all incoming packets.
- ... churn
- ... ignoring it.
- ... UDP data.
  Even when an avatar is standing still, it sends a constant stream of AgentUpdate packets, and a new UDPPacketBuffer object was being created to handle each one.
  This option pools those objects instead, which reduces memory churn.
  Currently off by default. Works, but the scope can be expanded.
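The pooling itself can be pictured with a small generic sketch (illustrative code, not the actual OpenSim pool implementation): buffers are taken from a stack when a packet arrives and pushed back once processing finishes, so steady-state traffic allocates almost nothing.

    using System.Collections.Generic;

    // Sketch: a trivial thread-safe object pool for reusable buffers.
    class PoolSketch<T> where T : new()
    {
        readonly Stack<T> _items = new Stack<T>();
        readonly int _maxSize;

        public PoolSketch(int maxSize) { _maxSize = maxSize; }

        public T Get()
        {
            lock (_items)
                return _items.Count > 0 ? _items.Pop() : new T();
        }

        public void Return(T item)
        {
            lock (_items)
            {
                if (_items.Count < _maxSize)
                    _items.Push(item);   // over capacity: let the GC collect it
            }
        }
    }

Whether a buffer can safely be returned depends on knowing when processing of it has genuinely finished, which is presumably why only some categories of incoming packet are pooled at this stage.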
- ... console for debug purposes.
  This is controlled via the "debug lludp start <in|out|all>" and "debug lludp stop <in|out|all>" region console commands.
  The command "debug lludp status" will show the current status.
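An illustrative region console session using the commands named above (the prompt and region name are hypothetical):

    Region (MyRegion) # debug lludp stop in
    Region (MyRegion) # debug lludp status
    Region (MyRegion) # debug lludp start all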
caps request" message for now.
I think this is more useful right now since it tells us if the viewer requested a seed caps at all in various scenarios (such as when teleporting to a new region).
- ... estate method is received from the client.
- ... Improve/ObjectUpdate packet out messages when debug is turned on.
  Practical effect is probably none.
- ... viewers.
- ... packetpool stats.
- ... Packetpool code.
- ... These were neither being returned nor, in many places, reused.
  Getting packets from a pool rather than deallocating and reallocating reduces memory churn, which in turn reduces garbage collection time and frequency.
- ... This allows different categories of stats to be shown, with options to list the categories or show all stats.
  Currently the categories are scene and simulator, and only a very few stats are registered via this mechanism so far.
  This commit also adds percentage stats for packets and blocks reused from the packet pool.
- ... prevent an inactive connection being left behind if the user closes the viewer whilst the connection is being established.
  This should remove the need to run the console command "kick user --force" when these connections are left around.
- ... easier for other code to use (e.g. LSL_Api) without having to reference OpenSim.Data just for this.
- ... bucket slot for RLV, notify the viewer about inventory folder updates.
  The viewer would not see the folder move without this, either on accept or decline.
  This commit also updates the TaskInventoryOffered message to conform better with the data LL uses.
  The changes: agentID is the prim owner rather than the prim id, the agent name is now simply the object name rather than the name with owner detail, the message is just the folder name in single quotes, and the message is not timestamped.
  However, the folder is still not renamed "#RLV/~<name>". The long-term solution is probably not to do these operations server-side.
  Notes will be added to http://opensimulator.org/mantis/view.php?id=6311
- ... in Scene.
  This is to resolve the previous build break.
  This unnecessarily but harmlessly reads and sets the parameter multiple times - Scene was doing the same thing.
- ... OpenSim.Region.Clientstack.Linden.UDP
  This is to allow it to use OpenSim.Framework.Monitoring in the future.
  This is also a better location, since the packet pool is Linden UDP specific.
- ... even test compiled.
- ... race condition checks.
  This is to allow a second attempt to remove an avatar even if "show connections" shows them as already inactive (i.e. close has already been attempted once).
  You should only attempt --force if a normal kick fails.
  This is partly for diagnostics, as we have seen some connections occasionally remain on lbsa plaza even if they are registered as inactive.
  This is not a permanent solution and may not work anyway - the ultimate solution is to stop this problem from happening in the first place.