| Commit message | Author | Age | Files | Lines |
|
|
|
| |
bot totals at end.
|
|
|
|
|
|
|
| |
event queue.
I think there is an argument for sending this information to NPCs anyway, since in some cases it appears a lot easier to write server-side bots by hooking into such internal events.
However, we would need to stop event messages from building up on NPC queues if they are never retrieved.
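As a rough illustration of that retrieval problem, a bounded per-NPC queue (a hypothetical helper, not OpenSimulator's actual event queue class) would discard the oldest entries rather than let unretrieved messages accumulate:

    // Hypothetical sketch only: cap per-NPC event queues so messages that are
    // never retrieved cannot grow without bound.
    using System.Collections.Generic;

    public class BoundedEventQueue
    {
        private readonly int m_maxLength;
        private readonly Queue<string> m_events = new Queue<string>();

        public BoundedEventQueue(int maxLength) { m_maxLength = maxLength; }

        public void Enqueue(string eventMessage)
        {
            lock (m_events)
            {
                // Drop the oldest event rather than grow indefinitely.
                if (m_events.Count >= m_maxLength)
                    m_events.Dequeue();
                m_events.Enqueue(eventMessage);
            }
        }

        public bool TryDequeue(out string eventMessage)
        {
            lock (m_events)
            {
                if (m_events.Count > 0)
                {
                    eventMessage = m_events.Dequeue();
                    return true;
                }
                eventMessage = null;
                return false;
            }
        }
    }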
|
|
|
|
| |
client without a queue to include the event message name.
|
|
|
|
|
|
| |
close through Scene.IncomingCloseAgent() and NPCAvatar.Close() rather than directly to Scene.RemoveClient().
This exception was actually harmless since it occurred at the very last stage of the remove client process.
|
| |
|
|
|
|
| |
V2 transfer protocol.
|
|
|
|
|
| |
all the various numbers that have been added to the console output.
Break out EventHistogram from CounterStat.
|
|
|
|
| |
information instead of mixing this with "IO Completion Threads"
|
|
|
|
|
|
| |
This is giving much better results on teleports between simulators over my LAN, where for some reason there is a pause before the receiving simulator processes UpdateAgent().
At this point, v2 teleports between neighbour and non-neighbour regions on a single simulator, between v2 simulators, and between a v1 and v2 simulator
are working okay for me in different scenarios (e.g. simple teleport, teleport back to the original quickly and re-teleport, teleport back to a neighbour and re-teleport, etc.).
|
| |
|
|\ |
|
| |
| |
| |
| | |
teleport comments.
|
| |
| |
| |
| |
| |
| |
| | |
(A,B,C) where the A root agent is still closed, terminating the connection.
This was occurring because the teleport to B did not set DoNotCloseAfterTeleport on A, as B was a neighbour (the flag is not set for neighbours to avoid the issue where the source region does not send Close() to regions that are still neighbours, and hence never resets DoNotCloseAfterTeleport).
The fix here is to still set DoNotCloseAfterTeleport if the scene presence is still registered as in transit from A.
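A minimal sketch of that rule, using illustrative types; only the DoNotCloseAfterTeleport name and the in-transit condition come from the description above, the rest is assumed for the example:

    // Hedged sketch, not the actual OpenSimulator code.
    public class ScenePresenceInfo
    {
        public bool IsInTransit;               // still registered as in transit from region A
        public bool DoNotCloseAfterTeleport;   // suppresses the pending close of the root agent
    }

    public static class TeleportCloseRules
    {
        public static void MarkBeforeTeleport(ScenePresenceInfo presence, bool destinationIsNeighbour)
        {
            // A neighbour destination normally leaves the flag unset, because the
            // source region never sends Close() to regions that remain neighbours.
            // If the presence is still in transit from a previous teleport, set the
            // flag anyway so the late close does not terminate the new connection.
            if (!destinationIsNeighbour || presence.IsInTransit)
                presence.DoNotCloseAfterTeleport = true;
        }
    }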
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
a line from A->B->C would not close region A when reaching C.
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires that the neighbour regions also detect when they should not close due to re-teleports re-establishing the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the
This commit appears to fix these issues and improve teleport, but still has holes on at least quick re-teleporting (and possibly occasionally on ordinary teleports).
Also, this has not been completely tested yet in scenarios where regions are running on different simulators.
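For illustration only, a minimal sketch of what such a scene presence state machine might look like; the state names and transition method are assumptions, not the actual implementation:

    // Illustrative sketch of serializing add/remove client calls through
    // explicit presence states, so a late Close() from a previous region
    // cannot race a newly re-established connection.
    public enum PresenceState { Running, PreRemove, Removing, Removed }

    public class PresenceStateMachine
    {
        private PresenceState m_state = PresenceState.Running;
        private readonly object m_sync = new object();

        // Only allow a transition from an expected state; callers that lose
        // the race simply back off instead of corrupting the presence.
        public bool TryTransition(PresenceState from, PresenceState to)
        {
            lock (m_sync)
            {
                if (m_state != from)
                    return false;
                m_state = to;
                return true;
            }
        }
    }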
|
|\ \ |
|
| | |
| | |
| | |
| | |
| | |
| | | |
This fixes the problem of avatars bouncing when they log in.
Added a little height to the avatar height fudges to eliminate a problem
of feet being slightly in the ground.
|
| | |
| | |
| | |
| | | |
for now. Once the code churn on teleport ends, I can find a better solution
|
| | |
| | |
| | |
| | |
| | | |
Add 'callback' query parameter to managed stats return to return function
form of JSON data.
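A small sketch of that "function form" wrapping (JSONP-style), assuming the callback name arrives as the value of the 'callback' query parameter; the helper name is hypothetical:

    // Hypothetical helper: wrap the JSON stats document in a function call
    // when a callback name is supplied, otherwise return plain JSON.
    public static class StatsResponse
    {
        public static string Wrap(string json, string callback)
        {
            if (string.IsNullOrEmpty(callback))
                return json;

            // e.g. callback "handleStats" yields: handleStats({...});
            return callback + "(" + json + ");";
        }
    }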
|
| | |
| | |
| | |
| | |
| | |
| | | |
For unknown reasons, a dynamic function signature cannot have more than 5
parameters. Error message now tells you this fact so you can curse MS and
then go change your function definitions.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Disabled by default. Enable by setting
[Startup]ManagedStatsRemoteFetchURI="Something"
and thereafter "http://ServerHTTPPort/Something/" will return all the managed
stats (equivalent to the "show stats all" console command).
Accepts queries "cat=", "cont=" and "stat=" to specify statistic category,
container and statistic names. The special name "all" is the default and returns
all values in that group.
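A minimal configuration sketch, assuming the simulator's HTTP port is 9000 and using "ManagedStats" as a placeholder path (both host and port below are assumptions, not defaults):

    [Startup]
        ManagedStatsRemoteFetchURI = "ManagedStats"

    ; Example queries (host and port are placeholders):
    ;   http://localhost:9000/ManagedStats/
    ;   http://localhost:9000/ManagedStats/?cat=all&cont=all&stat=all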
|
| | |
| | |
| | |
| | | |
set terrain heights console command as an example.
|
| |/
| |
| |
| | |
change only affects the editing user's experience. Non-editing users will see nothing different from the current 'slow' result. See comments for the thought process and how the issues surrounding terrain editing, cache, bandwidth, threading, terrain patch reliability and throttling were balanced.
|
| |
| |
| |
| | |
that were accidentally left in
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
back in 2009.
"show modules" is the functional console command that will show currently loaded modules.
Addresses http://opensimulator.org/mantis/view.php?id=6730
|
| |
| |
| |
| |
| |
| | |
add minor details to some log messages, rename a misleading local variable name.
No functional changes.
|
| |
| |
| |
| |
| |
| |
| | |
transfer protocol into v2.
This stops OpenSimulator from still trying to teleport the user if they hit cancel on the teleport screen or closed the viewer whilst the protocol was trying to create an agent on the remote region.
Ideally, the code would also attempt to tell the destination simulator that the agent should be removed (accounting for issues where the destination was not responding in the first place, etc.).
|
|/ |
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
all") to file OpenSimStats.log every 5 seconds.
This can currently only be activated with the console command "debug stats record start".
Off by default.
Records to file OpenSimStats.log for simulator and RobustStats.log for ROBUST
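A rough sketch of this kind of periodic recording, assuming a delegate that produces the same text as "show stats all"; the class and parameter names are illustrative:

    // Illustrative sketch: append a full stats snapshot to a log file every
    // 5 seconds until stopped.
    using System;
    using System.IO;
    using System.Threading;

    public class StatsRecorder
    {
        private Timer m_timer;

        public void Start(Func<string> getAllStatsReport, string logPath)
        {
            m_timer = new Timer(
                _ => File.AppendAllText(logPath, getAllStatsReport() + Environment.NewLine),
                null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
        }

        public void Stop()
        {
            if (m_timer != null)
                m_timer.Dispose();
        }
    }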
|
| |
| |
| |
| | |
worker/IOCP threadpool numbers
|
|/
|
|
| |
estate bans also, and delete the obsolete config var.
|
| |
|
|
|
|
|
|
| |
not, either via config (SerializeOSDRequests in [Network]) or via the "debug comms set" console command.
For debug purposes to assess what impact this has on network response in a heavy test environment.
|
|
|
|
|
|
| |
forget conditions.
I generally prefer this approach for regression tests because of the complexity of accounting for different threading conditions.
|
|
|
|
| |
category - these are not things one needs to do in normal operation
|
|\ |
|
| |\ |
|
| | |
| | |
| | |
| | | |
position. That was complete overkill and is unnecessary at this point.
|
| |/
|/|
| |
| | |
with the console command "debug threadpool set"
|
| |
| |
| |
| | |
If this is an issue, we could change the log4net config instead to allow re-enablement.
|
| |
| |
| |
| |
| |
| |
| | |
as well as max.
Make it clear that we only try to adjust max, and log at warn level if this fails.
Other minor logging cleanup.
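For illustration, a hedged sketch of adjusting only the max values through the standard System.Threading.ThreadPool API and warning when the runtime refuses; the class name and log format are assumptions:

    // Attempt to raise max worker/IOCP threads; SetMaxThreads returns false
    // if the runtime rejects the request, in which case we log a warning.
    using System;
    using System.Threading;

    public static class ThreadPoolConfig
    {
        public static void ApplyMaxThreads(int maxWorkerThreads, int maxIocpThreads)
        {
            if (!ThreadPool.SetMaxThreads(maxWorkerThreads, maxIocpThreads))
                Console.WriteLine(
                    "[THREADPOOL]: WARN - failed to set max threads to {0} worker, {1} IOCP",
                    maxWorkerThreads, maxIocpThreads);

            int workers, iocp;
            ThreadPool.GetMaxThreads(out workers, out iocp);
            Console.WriteLine("[THREADPOOL]: Max threads now {0} worker, {1} IOCP", workers, iocp);
        }
    }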
|
|\ \ |
|
| | | |
|
|/ /
| |
| |
| | |
thread numbers and immediate post-config thread numbers
|
|\ \
| |/ |
|
| |
| |
| |
| | |
the fields get messed up because the transfer is async
|