| Commit message | Author | Files | Lines |
|
specify number of concurrent requests on a service.
|
|
belong there.....)
* Add an IsBlocked(string Key) method so it can be used more generically. (Think: if we want to rate limit login failures, the Login Service could call IsBlocked(uuid.ToString()) and ignore the connection if it returns true; if IsBlocked returns false, we could run the login check, and if the login fails we could run the Process method to count the login failures.)
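A minimal sketch of how that could look from a login path, assuming only that the protector exposes IsBlocked(string) and Process(string) as described; everything else here is invented for illustration:

    using System;
    using System.Collections.Generic;

    // Illustrative stand-in (not the actual OpenSim classes): a protector that
    // counts failures per key and reports whether a key is currently blocked.
    class FailureBlocker
    {
        private readonly Dictionary<string, int> _failures = new Dictionary<string, int>();
        private readonly int _threshold;

        public FailureBlocker(int threshold) { _threshold = threshold; }

        public bool IsBlocked(string key)
        {
            int count;
            return _failures.TryGetValue(key, out count) && count >= _threshold;
        }

        // Called after a failed attempt, mirroring the Process() step above.
        public void Process(string key)
        {
            int count;
            _failures.TryGetValue(key, out count);
            _failures[key] = count + 1;
        }
    }

    class LoginExample
    {
        static readonly FailureBlocker Blocker = new FailureBlocker(3);

        // Hypothetical login entry point: ignore blocked keys, count failures.
        static bool TryLogin(string userUuid, string password)
        {
            if (Blocker.IsBlocked(userUuid))
                return false;                    // drop the connection

            bool ok = password == "correct";     // stand-in for real authentication
            if (!ok)
                Blocker.Process(userUuid);       // count the failed login
            return ok;
        }

        static void Main()
        {
            string uuid = Guid.NewGuid().ToString();
            for (int i = 0; i < 5; i++)
                Console.WriteLine(TryLogin(uuid, "wrong"));  // blocked after 3 failures
        }
    }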
|
|
* Break out common BasicDOSProtector code into a separate class.
|
|
HTTP Server handlers: XMLRPC Handler, GenericHttpHandler and <Various>StreamHandler.
* Applied the XmlRpcBasicDOSProtector.cs to the login service as both an example and good practice.
* Applied the BaseStreamHandlerBasicDOSProtector.cs to the friends service as an example of the DOS Protector on StreamHandlers.
* Added CircularBuffer, used for CPU- and memory-friendly rate monitoring.
* DosProtector has two states: (1) just check for blocked users and general request velocity; (2) track velocity per user. It only jumps to state 2 if it's getting a lot of requests, and state 1 is about as resource-friendly as if it weren't there at all.
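A rough illustration of the two-state idea (this is not the actual CircularBuffer/BasicDOSProtector code; names and thresholds are invented, and a Queue stands in for the circular buffer):

    using System;
    using System.Collections.Generic;

    // Illustrative two-state limiter: state 1 watches only overall request
    // velocity; it escalates to state 2 (per-client tracking) when the global
    // rate crosses a threshold, and relaxes again when load drops.
    class TwoStateLimiter
    {
        private readonly Queue<DateTime> _global = new Queue<DateTime>();
        private readonly Dictionary<string, Queue<DateTime>> _perClient = new Dictionary<string, Queue<DateTime>>();
        private readonly int _globalMax;      // requests per window before escalating
        private readonly int _clientMax;      // per-client requests per window before blocking
        private readonly TimeSpan _window;
        private bool _perClientMode;          // true while in state 2

        public TwoStateLimiter(int globalMax, int clientMax, TimeSpan window)
        {
            _globalMax = globalMax; _clientMax = clientMax; _window = window;
        }

        public bool Allow(string clientKey)
        {
            DateTime now = DateTime.UtcNow;
            Count(_global, now);
            _perClientMode = _global.Count > _globalMax;  // escalate/relax with load

            if (!_perClientMode)
                return true;                              // cheap path: state 1

            Queue<DateTime> q;
            if (!_perClient.TryGetValue(clientKey, out q))
                _perClient[clientKey] = q = new Queue<DateTime>();
            Count(q, now);
            return q.Count <= _clientMax;                 // block only noisy clients
        }

        private void Count(Queue<DateTime> q, DateTime now)
        {
            q.Enqueue(now);
            while (q.Count > 0 && now - q.Peek() > _window)
                q.Dequeue();                              // drop samples outside the window
        }
    }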
|
|
it easier to write WebSocket service code that is resistant to Denial of Service attacks.
|
|
|
|
(cherry picked from commit 93abcde69043b175071e0bb752538d9730433f1d)
|
|
|
|
worker/iocp threadpool numbers
|
|
|
|
not, either via config (SerializeOSDRequests in [Network]) or via the "debug comms set" console command.
For debug purposes to assess what impact this has on network response in a heavy test environment.
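Assuming the usual OpenSim ini layout, turning this on would look something like the following (the section and setting names are from the commit; the value and placement are an assumption):

    [Network]
        ; serialize OSD requests to assess the impact on network response (debug aid)
        SerializeOSDRequests = true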
|
|
category - these are not things one needs to do in normal operation
|
|
with the console command "debug threadpool set"
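Under the hood, adjusting these numbers presumably boils down to the standard .NET ThreadPool setters; a minimal, illustrative console program (not the actual OpenSim command handler):

    using System;
    using System.Threading;

    class ThreadPoolTuning
    {
        // Illustrative only: apply new min/max worker and IOCP thread counts,
        // roughly what a "debug threadpool set" style command would do.
        static void Main()
        {
            int worker, iocp;

            ThreadPool.GetMinThreads(out worker, out iocp);
            Console.WriteLine("min before: worker={0} iocp={1}", worker, iocp);

            // SetMinThreads/SetMaxThreads return false if the values are rejected.
            if (!ThreadPool.SetMinThreads(10, 10))
                Console.WriteLine("failed to set min threads");
            if (!ThreadPool.SetMaxThreads(100, 100))
                Console.WriteLine("failed to set max threads");

            ThreadPool.GetMaxThreads(out worker, out iocp);
            Console.WriteLine("max after: worker={0} iocp={1}", worker, iocp);
        }
    }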
|
|
not hold.
Not yet in use.
|
|
1sec). Group chat is going over the EQ... Hopefully this won't increase CPU when there's nothing going on, but we need to watch for that.
|
|
week's SIMULATOR/0.1 protocol for now.
|
|
|
|
need to dequeue and enqueue items every 1sec.
|
|
these requests actively on the processing queue if it seems they're not ready.
|
|
requests time out in 60 secs.
There's plenty of room for improvement in handling the EQs. Some other time...
|
|
count >0 is not the smartest move...""
This reverts commit 71278919575b0e0222cdbe3c0cefa5919f9a75bc.
|
|
successfully tested, and I'm merging back those changes, which proved to
be good.
Revert "Revert "Cleared up much confusion in PollServiceRequestManager. Here's the history:""
This reverts commit fa2370b32ee57a07f27501152c3c705a883b13d8.
|
|
history:"
This reverts commit e46459ef21e1ee5ceaeca70365a7c881d33b09ce.
|
|
not the smartest move..."
This reverts commit f4317dc26d670c853d0ea64b401b00f718f09474.
|
|
PollServiceRequestManager."
This reverts commit 5f95f4d78e8c7d17b8ba866907156fe6d4444c04.
|
|
the purpose of BlockingQueues. Trying this, to see the effect on CPU."
This reverts commit 5232ab0496eb4fe6903a0fd328974ac69df29ad8.
|
|
purpose of BlockingQueues. Trying this, to see the effect on CPU.
|
|
PollServiceRequestManager.
|
|
smartest move...
|
|
When Melanie added the web fetch inventory throttle to core, she made the long poll requests (EQs) effectively be handled in an active loop. All those requests, if they existed, were constantly dequeued, checked for events (which most often they didn't have), and requeued. This was an active loop thread on a 100ms cycle!
This fixes the issue. Now the inventory requests, if they aren't ready to be served, are placed directly back in the queue, but the long poll requests aren't placed there until there are events ready to be sent or the timeout has been reached.
This puts the LongPollServiceWatcherThread back to a 1sec cycle, as it was before.
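A very rough sketch of the resulting dispatch logic (hypothetical types and names, not the actual PollServiceRequestManager):

    using System;
    using System.Collections.Generic;

    // Illustrative only: ordinary poll requests go straight back on the
    // processing queue, while long-poll (EQ) requests are parked and only
    // promoted when they have events to send or have timed out.
    class PollDispatchSketch
    {
        public class PollRequest
        {
            public bool IsLongPoll;
            public DateTime Started = DateTime.UtcNow;
            public Func<bool> HasEvents = () => false;
            public bool TimedOut { get { return DateTime.UtcNow - Started > TimeSpan.FromSeconds(60); } }
        }

        private readonly Queue<PollRequest> _processing = new Queue<PollRequest>();
        private readonly List<PollRequest> _longPolls = new List<PollRequest>();

        public void Enqueue(PollRequest req)
        {
            if (req.IsLongPoll)
                _longPolls.Add(req);          // watched on a 1 sec cycle
            else
                _processing.Enqueue(req);     // ordinary requests stay on the active queue
        }

        // Meant to run about once per second, not on a 100 ms spin loop.
        public void LongPollWatcherTick()
        {
            foreach (PollRequest req in _longPolls.ToArray())
            {
                if (req.HasEvents() || req.TimedOut)
                {
                    _longPolls.Remove(req);
                    _processing.Enqueue(req); // only now does it reach the workers
                }
            }
        }
    }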
|
|
handlers.
This adds explicit cap poll handler support to the Caps classes rather than relying on callers to do the complicated coding.
Other refactoring was required to get logic into the right places to support this.
|
|
|
|
This happens in HEAD handlers.
|
|
to print various counts of capability invocation by user and by cap
This currently prints caps requests received and handled, so that an overload of received compared to handled, or a deadlock, can be detected.
This involves making BaseStreamHandler and BaseOutputStream record the counts, which means inheritors should override ProcessRequest() instead of Handle().
However, existing inheriting classes overriding Handle() will still work, albeit without stats recording.
"show caps" becomes "show caps list" to disambiguate between show caps commands
|
|
|
|
server.network.HTTPRequestsMade in "show stats all"
|
|
httpserver.<port>.IncomingHTTPRequestsProcessed stat
|
|
threadpool to the ServerBase rather than being in Util
|
|
simulator console
This means the "show stats" command is now active on the robust console.
|
|
(debug level 6) on outgoing requests, depending on debug level
This is set via "debug http out <level>"
This matches the existing debug level behaviours for logging incoming http data
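For reference, enabling the most verbose outgoing logging from the console would presumably look like:

    debug http out 6

(the meaning of each level mirrors what the incoming-side levels already log, per the note above; 6 is simply the level mentioned in this commit).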
|
|
belong to which HttpServer
|
|
in 3eee991 but removed in 7c0bfca
Do not rely on destructors to stop things.
These fire at unpredictable times and cause problems such as http://opensimulator.org/mantis/view.php?id=6503
and most probably http://opensimulator.org/mantis/view.php?id=6668
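The general pattern being preferred here, sketched with hypothetical names: stop workers explicitly on the shutdown path rather than relying on a finalizer, which may run at an arbitrary time or not at all:

    using System;
    using System.Threading;

    // Illustrative only: a worker that must be stopped deliberately at shutdown.
    // Relying on ~PollWorker() would run at an unpredictable time (if ever),
    // which is the failure mode the commit above is avoiding.
    class PollWorker
    {
        private readonly Timer _timer;
        private bool _running = true;

        public PollWorker()
        {
            _timer = new Timer(_ => Tick(), null, 0, 1000);
        }

        private void Tick()
        {
            if (_running)
                Console.WriteLine("polling...");
        }

        // Called explicitly from the server's shutdown path; no finalizer defined.
        public void Stop()
        {
            _running = false;
            _timer.Dispose();
        }
    }

    class Program
    {
        static void Main()
        {
            PollWorker worker = new PollWorker();
            Thread.Sleep(3000);
            worker.Stop();          // explicit, predictable shutdown
        }
    }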
|
|
|
|
|
|
startup, logging an error since this is commonly due to an unclean shutdown.
An unclean shutdown can cause constantly moving objects to disappear if an OAR has just been loaded and they have not reached the persistence time threshold, among other problems.
|
|
|
|
connection close issue by getting rid of the socket references.
* This adds a connection timeout checker to shut down poor or evil connections, and combats DOS attempts that just connect, make no complete requests, and wait.
* It also actually implements KeepAlive, instead of just understanding the Connection header in the request. You can test it by connecting, requesting a keep-alive header, and sending another request on the same connection. The new timeout checker closes expired keep-alive sessions; just make sure you send the next request within 70 seconds of connecting, or the timeout checker will time out the connection.
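A stripped-down illustration of the timeout-checker idea, with hypothetical types rather than the actual HttpServer internals: record when each connection was last active and close anything idle past the limit:

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;

    // Illustrative sketch: connections that have not completed a request within
    // the limit (70 s in the commit above) are closed, which also defeats
    // clients that connect and then just sit there.
    class ConnectionTimeoutChecker
    {
        private readonly Dictionary<TcpClient, DateTime> _lastActivity = new Dictionary<TcpClient, DateTime>();
        private readonly TimeSpan _limit = TimeSpan.FromSeconds(70);

        public void Touch(TcpClient client)
        {
            _lastActivity[client] = DateTime.UtcNow;   // call on connect and on each completed request
        }

        public void Forget(TcpClient client)
        {
            _lastActivity.Remove(client);              // call when the connection closes normally
        }

        // Run periodically (e.g. every few seconds) from a checker thread or timer.
        public void Sweep()
        {
            DateTime now = DateTime.UtcNow;
            foreach (KeyValuePair<TcpClient, DateTime> kvp in new List<KeyValuePair<TcpClient, DateTime>>(_lastActivity))
            {
                if (now - kvp.Value > _limit)
                {
                    kvp.Key.Close();                   // expire idle / never-completing connections
                    _lastActivity.Remove(kvp.Key);
                }
            }
        }
    }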
|
|
simulator logs, for debug purposes
|
|
would like to restrict the maximum packet size (and therefore protect against memory DOSing), then you should set this. I defaulted it to 40MB. This means that, in theory, a malicious user could connect and send a packet that claims the payload is up to 40MB (even if it doesn't actually turn out to be 40MB). More testing needs to be done where the packets are maliciously malformed.
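The kind of guard this implies, sketched with invented names; the 40MB default is from the commit, but the code itself is illustrative:

    using System;
    using System.IO;

    // Illustrative only: reject requests whose declared payload exceeds the
    // configured cap before allocating a buffer for them, so a claimed-huge
    // Content-Length cannot be used for a memory DOS.
    class PayloadLimitSketch
    {
        private const long MaxPayloadBytes = 40L * 1024 * 1024;   // 40MB default, per the commit

        public static Stream OpenBody(long declaredContentLength, Stream body)
        {
            if (declaredContentLength > MaxPayloadBytes)
                throw new InvalidDataException(
                    string.Format("Declared payload of {0} bytes exceeds the {1} byte limit",
                                  declaredContentLength, MaxPayloadBytes));

            return body;   // caller reads the (bounded) body as usual
        }
    }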
|
|
|