| Commit message | Author | Age | Files | Lines |
CapabilitiesModule was being instantiated twice (damn Mono.Addins).
GetTexture.
registers/unregisters capabilities and a specific bunch of capability implementations in Linden space called BunchOfCaps.
Renamed a few methods that were misnomers.
Compiles but doesn't work.
NOTHING OF THIS WORKS. Compiles.
GetTexture handler. The region module in Linden space uses it. WARNING: nothing of this works yet, it just compiles.
quite some time.
entity updates in LLClientView.cs
agents. Child throttles are based on the number of child agents
known to the root, and each child receives at least 1/4 of the
throttle given to the root.
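The commit doesn't show the allocation formula, so the following is only a hypothetical sketch of the rule described above: a child agent's share is derived from the number of child agents known to the root, but never drops below 1/4 of the root's throttle. The function name and integer division are assumptions.

```python
def child_throttle(root_rate: int, num_child_agents: int) -> int:
    # Hypothetical sketch: split the root agent's rate across its known
    # child agents, but clamp each child's share to at least root/4.
    if num_child_agents == 0:
        return root_rate
    share = root_rate // num_child_agents
    return max(share, root_rate // 4)
```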
limits, because the only ones used now are the defaults (which are overwritten
by the client throttles anyway). Updated the default rates to correspond to
about 350 kbps.
Also added a configuration option to disable the adaptive throttle. The default
is the previous behavior (no adaptation).
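The per-category numbers are not in the message; the figures below are illustrative values chosen only to show how per-queue byte rates sum to a total budget of roughly 350 kbps.

```python
# Illustrative per-category drip rates in bytes/second (NOT the actual
# values from the commit); the category names mirror the Linden throttles.
default_rates = {
    "resend": 6625, "land": 9125, "wind": 1750, "cloud": 1750,
    "task": 9125, "texture": 9125, "asset": 6250,
}

total_bytes_per_sec = sum(default_rates.values())  # 43750 bytes/s
total_kbps = total_bytes_per_sec * 8 / 1000        # 8 bits/byte -> 350.0 kbps
```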
or about 15 packets per second.
command to look at the entity update priority queue. Added a "name" parameter
to show queues, show pqueues and show throttles to look at data for a specific
user.
queuetest
and Removes in that order.
hierarchy. A few other cosmetic changes.
an acknowledgement from the network. This prevents RTT and throttles from being updated as they would when an ACK is actually received. Also fixed stats logging for unacked bytes and resent packets in this case.
ResendPrimUpdates, it is removed from the UnackedPacketCollection.
per Melanie's very good suggestion. The immediate queue is
serviced completely before all others, making it a very good
place to put avatar updates & attachments.
Moved the priority queue out of the LLUDP directory and
into the framework. It is now a fairly general utility.
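A minimal sketch of the queue discipline described above (class and method names are assumptions, not the OpenSim API): entries enqueued without a priority go to the immediate queue, which is drained completely before any prioritized entry is served.

```python
import heapq
from collections import deque

class PriorityQueueWithImmediate:
    """Sketch: an 'immediate' FIFO serviced exhaustively before the heap."""
    def __init__(self):
        self.immediate = deque()   # served first, completely
        self.heap = []             # (priority, seq, item); lower = sooner
        self.seq = 0               # tie-breaker keeps FIFO order per priority

    def enqueue(self, item, priority=None):
        if priority is None:
            self.immediate.append(item)      # avatar updates, attachments
        else:
            heapq.heappush(self.heap, (priority, self.seq, item))
            self.seq += 1

    def dequeue(self):
        if self.immediate:
            return self.immediate.popleft()
        if self.heap:
            return heapq.heappop(self.heap)[2]
        return None
```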
clients. If the sent packets are acked successfully, the throttle
will open quickly up to the maximum specified by the client and/or
the sim's client throttle.
This still needs a lot of adjustment to get the rates correct.
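A hedged sketch of the slow-start-style behavior described above; the class name, growth constant, and halving backoff are assumptions, not the actual OpenSim code.

```python
class AdaptiveThrottle:
    """Sketch: start low, grow on acks up to the client/sim maximum,
    back off when a packet expires unacked."""
    def __init__(self, start_rate: int, max_rate: int, growth: int = 1000):
        self.rate = start_rate
        self.max_rate = max_rate   # from the client and/or sim throttle
        self.growth = growth       # additive increase per acked packet

    def on_ack(self):
        self.rate = min(self.rate + self.growth, self.max_rate)

    def on_expire(self):
        self.rate = max(self.rate // 2, 1)   # multiplicative backoff
```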
mechanism as the entity update queues.
Often, by the time the UDPServer realizes that an entity update packet
has not been acknowledged, there is a newer update for the same entity
already queued up or there is a higher priority update that should be
sent first. This patch eliminates 1:1 packet resends for unacked entity
update packets. Instead, unacked update packets are decomposed into the
original entity updates and those updates are placed back into the
priority queues based on their new priority but the original update
timestamp. This will generally place them at the head of the line to be
put back on the wire as a new outgoing packet but prevents the resend
queue from filling up with multiple stale updates for the same entity.
This new approach takes advantage of the UDP nature of the Linden protocol
in that the intent of a reliable update packet is that if it goes
unacknowledged, SOMETHING has to happen to get the update to the client.
We are simply making sure that we are resending current object state
rather than stale object state.
Additionally, this patch includes a generalized callback mechanism so
that any caller can specify their own method to call when a packet
expires without being acknowledged. We use this mechanism to requeue
update packets and otherwise use the UDPServer default method of just
putting expired packets in the resend queue.
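The mechanism above can be sketched as follows; the type and function names are hypothetical stand-ins for the real OpenSim types. The default expiry path resends the packet 1:1, while the update-packet callback decomposes it and requeues each entity update with its original timestamp so it sorts ahead of newer updates.

```python
from collections import namedtuple

EntityUpdate = namedtuple("EntityUpdate", "entity_id timestamp")

class OutgoingPacket:
    def __init__(self, updates=None, on_expire=None):
        self.updates = updates or []
        self.on_expire = on_expire   # caller-supplied expiry handler

def handle_expiry(packet, resend_queue, update_queue):
    if packet.on_expire is not None:
        packet.on_expire(packet, update_queue)   # caller decides
    else:
        resend_queue.append(packet)              # default: plain 1:1 resend

def requeue_updates(packet, update_queue):
    # Decompose into the original entity updates; the ORIGINAL timestamp
    # puts them at the head of the line as fresh outgoing packets.
    update_queue.extend(packet.updates)
    update_queue.sort(key=lambda u: u.timestamp)
```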
this appears to cause problems with the system timer resolution.
This caused a problem with tokens going into the root throttle in
bursts, leading to some starvation.
Also changed EnqueueOutgoing to always queue a packet if there
are already packets in the queue. This ensures consistent ordering
of packet sends.
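A minimal token-bucket sketch of the scheme discussed above (names and units are assumptions): tokens are credited from elapsed wall-clock time at each removal attempt rather than on a fixed-interval timer, which sidesteps coarse system-timer resolution.

```python
class TokenBucket:
    """Sketch: tokens drip in at a fixed rate; a send succeeds only if
    enough tokens have accumulated, capped at the burst size."""
    def __init__(self, rate: float, burst: float):
        self.rate = rate        # tokens (bytes) credited per second
        self.burst = burst      # bucket capacity
        self.tokens = 0.0
        self.last = 0.0

    def drip(self, now: float):
        # Credit for elapsed time instead of waiting on a timer tick.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def remove(self, amount: float, now: float) -> bool:
        self.drip(now)
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False
```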
Conflicts:
OpenSim/Region/ClientStack/LindenUDP/LLClientView.cs
types of property updates to be specified. Not sure if one form
of property update should supersede another, but for now the old
OpenSim behavior is preserved by sending both.
to the entity update queue. The number of property packets can
become significant when selecting/deselecting large numbers of
objects.
This is experimental code.
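The flag names below are illustrative, not the actual OpenSim enum; the sketch only shows the idea of folding property updates into the same per-entity queue entry by OR-ing update flags, so selecting or deselecting many objects does not generate one packet per object.

```python
from enum import Flag, auto

class UpdateFlags(Flag):
    # Illustrative flags: property updates ride the entity update queue
    # alongside movement updates instead of going out as separate packets.
    POSITION = auto()
    ROTATION = auto()
    PROPERTIES = auto()

def merge_pending(pending: UpdateFlags, new: UpdateFlags) -> UpdateFlags:
    # A newer update for the same entity is OR-ed into the pending entry,
    # leaving one queue entry per entity regardless of selection churn.
    return pending | new
```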
when client and simulator throttles are set. This algorithm also uses a
pre-defined burst rate of 150% of the sustained rate for each of the
throttles.
Removed the "state" queue. The state queue is not a Linden queue and
appeared to be used just to get kill packets sent.
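The 150% burst relationship is the one concrete number in the message above; everything else in this sketch (names, units, the tuple shape) is assumed.

```python
BURST_FACTOR = 1.5   # pre-defined burst rate: 150% of the sustained rate

def throttle_params(sustained_bytes_per_sec: int) -> tuple:
    # Applied whenever a client or simulator throttle is set: the drip
    # rate is the sustained rate, the bucket capacity is 150% of it.
    return sustained_bytes_per_sec, int(sustained_bytes_per_sec * BURST_FACTOR)
```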
inventory
|
|/
|
|
|
|
|
|
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| | |
queuetest
Conflicts:
OpenSim/Region/ClientStack/LindenUDP/LLClientView.cs
OpenSim/Region/Framework/Scenes/Prioritizer.cs
time to wait to retransmit packets) always maxed out (no retransmissions
for 24 or 48 seconds).
Note that this is going to cause faster (and more) retransmissions. A fix
for dynamic throttling needs to go with this.
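The commit doesn't show the estimator it fixes; as context, the standard RFC 6298 retransmission-timeout calculation (which stops retransmitting promptly once the RTO pegs at its ceiling) looks roughly like this, with the 24-second figure mentioned above taken as an assumed clamp.

```python
def update_rto(srtt, rttvar, sample, max_rto=24.0):
    """RFC 6298-style RTO update; returns (srtt, rttvar, rto) in seconds."""
    if srtt is None:                 # first RTT measurement
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = 0.75 * rttvar + 0.25 * abs(srtt - sample)
        srtt = 0.875 * srtt + 0.125 * sample
    # Clamp between a 1 s floor and the ceiling that caused the stall.
    rto = min(max(srtt + 4 * rttvar, 1.0), max_rto)
    return srtt, rttvar, rto
```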
improved networking performance.
The reprioritization algorithms still need to be ported; one is
in place.
is big enough.