Commit ca84cd13 authored by Vince Pergolizzi

fix small typos in docs

parent d37289b2
@@ -149,7 +149,7 @@ threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```
-`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` the issued them.
+`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` that issued them.
Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.
@@ -15,8 +15,6 @@ the library invokes this callback, and *if* a non-null session is returned: oper
a particular profiling session returns a collection of `IProfiledCommand`s which contain timing information for all commands sent to redis by the
configured `ConnectionMultiplexer`. It is the callback's responsibility to maintain any state required to track individual sessions.
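For illustration, a minimal sketch of wiring this up, assuming the 2.x `RegisterProfiler` callback and `ProfilingSession` API described above (the connection string and key names are placeholders):

```csharp
using System;
using StackExchange.Redis;
using StackExchange.Redis.Profiling;

var muxer = ConnectionMultiplexer.Connect("localhost:6379");
var session = new ProfilingSession();

// the callback runs per operation; returning null opts that operation out of profiling
muxer.RegisterProfiler(() => session);

var db = muxer.GetDatabase();
db.StringSet("profiled-key", "value");
db.StringGet("profiled-key");

// later: harvest everything this session captured
foreach (IProfiledCommand cmd in session.FinishProfiling())
{
    Console.WriteLine($"{cmd.Command} took {cmd.ElapsedTime.TotalMilliseconds:F3} ms");
}
```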
Available Timings
---
@@ -36,14 +34,13 @@ StackExchange.Redis exposes information about:
`TimeSpan`s are high resolution, if supported by the runtime. `DateTime`s are only as precise as `DateTime.UtcNow`.
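As a rough sketch of reading those timings off a single profiled command (the member names used here are the commonly documented `IProfiledCommand` properties, not taken verbatim from the text above):

```csharp
using System;
using StackExchange.Redis.Profiling;

static class ProfilingReport
{
    // print the timing breakdown of a single profiled command
    public static void Describe(IProfiledCommand cmd)
    {
        Console.WriteLine($"{cmd.Command} against {cmd.EndPoint} (db {cmd.Db})");
        Console.WriteLine($"  created (DateTime, UtcNow precision): {cmd.CommandCreated:O}");
        Console.WriteLine($"  creation -> enqueued:  {cmd.CreationToEnqueued}"); // the TimeSpans are high resolution
        Console.WriteLine($"  enqueued -> sending:   {cmd.EnqueuedToSending}");
        Console.WriteLine($"  sent -> response:      {cmd.SentToResponse}");
        Console.WriteLine($"  response -> completed: {cmd.ResponseToCompletion}");
        Console.WriteLine($"  total elapsed:         {cmd.ElapsedTime}");
    }
}
```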
Example profilers
---
Due to StackExchange.Redis's asynchronous interface, profiling requires outside assistance to group related commands together.
This is achieved by providing the desired `ProfilingSession` object via the callback, and (later) calling `FinishProfiling()` on that session.
-Probably the most useful general-purpose session-provider is one that provides session automatically and works between `async` calls; this is simply:
+Probably the most useful general-purpose session-provider is one that provides sessions automatically and works between `async` calls; this is simply:
```csharp
class AsyncLocalProfiler
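{
    // a minimal sketch of the provider body, assuming an AsyncLocal<ProfilingSession>
    // so the same session follows the logical flow across awaits
    // (requires System.Threading and StackExchange.Redis.Profiling)
    private readonly AsyncLocal<ProfilingSession> perThreadSession = new AsyncLocal<ProfilingSession>();

    public ProfilingSession GetSession()
    {
        var val = perThreadSession.Value;
        if (val == null)
        {
            perThreadSession.Value = val = new ProfilingSession();
        }
        return val;
    }
}
```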
@@ -173,7 +170,7 @@ threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```
-`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` the issued them.
+`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` that issued them.
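A hedged reconstruction of the kind of toy loop this describes (the `muxer` connection and `AsyncLocalProfiler` helper carry over from the earlier sketches and are illustrative, not the docs' exact code):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using StackExchange.Redis.Profiling;

var profiler = new AsyncLocalProfiler();
muxer.RegisterProfiler(profiler.GetSession);

var perThreadTimings = new ConcurrentDictionary<Thread, List<IProfiledCommand>>();

var threads = new List<Thread>();
for (var i = 0; i < 16; i++)
{
    threads.Add(new Thread(() =>
    {
        var db = muxer.GetDatabase();
        for (var j = 0; j < 1000; j++)
        {
            db.StringIncrement("toy-counter");
        }
        // each thread ends up with its own session, so harvest it before the thread exits
        perThreadTimings[Thread.CurrentThread] = profiler.GetSession().FinishProfiling().ToList();
    }));
}

threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```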
Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.
@@ -182,7 +182,3 @@ db.StreamClaim("events_stream",
```
There are several other methods used to process streams using consumer groups. Please reference the Streams unit tests for those methods and how they are used.
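As a loose sketch of that consumer-group flow (the stream, group, and consumer names here are illustrative, not taken from the tests):

```csharp
using StackExchange.Redis;

var muxer = ConnectionMultiplexer.Connect("localhost:6379");
var db = muxer.GetDatabase();

// create the group once, reading from the start of the stream
db.StreamCreateConsumerGroup("events_stream", "events_group", StreamPosition.Beginning);

// read up to 10 new messages on behalf of "consumer_1"
var entries = db.StreamReadGroup("events_stream", "events_group", "consumer_1",
    StreamPosition.NewMessages, count: 10);

foreach (var entry in entries)
{
    // process entry.Values, then acknowledge so the message leaves the pending list
    db.StreamAcknowledge("events_stream", "events_group", entry.Id);
}
```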
@@ -35,7 +35,7 @@ on whether you have a `SynchronizationContext`. If you *don't* (common for console apps,
services, etc), then the TPL uses the standard thread-pool mechanisms to schedule the
continuation. If you *do* have a `SynchronizationContext` (common in UI applications
and web-servers), then its `Post` method is used instead; the `Post` method is *meant* to
-be an asynchronous dispatch API. But... not all impementations are equal. Some
+be an asynchronous dispatch API. But... not all implementations are equal. Some
`SynchronizationContext` implementations treat `Post` as a synchronous invoke. This is true
in particular of `LegacyAspNetSynchronizationContext`, which is what you get if you
configure ASP.NET with:
@@ -3,16 +3,15 @@
Verify the maximum bandwidth supported on your client and on the server where redis-server is hosted. If requests are getting bound by bandwidth, they will take longer to complete and can thereby cause timeouts.
Similarly, verify you are not getting CPU bound on the client or on the server box, which would cause requests to wait for CPU time and thereby time out.
-Are there commands taking long time to process on the redis-server?
+Are there commands taking a long time to process on the redis-server?
---------------
-There can be commands that are taking long time to process on the redis-server causing the request to timeout. Few examples of long running commands are mget with large number of keys, keys * or poorly written lua script. You can run the SlowLog command to see if there are requests taking longer than expected. More details regarding the command can be found [here](https://redis.io/commands/slowlog).
+There can be commands that are taking a long time to process on the redis-server causing the request to timeout. Few examples of long running commands are mget with large number of keys, keys * or poorly written lua script. You can run the SlowLog command to see if there are requests taking longer than expected. More details regarding the command can be found [here](https://redis.io/commands/slowlog).
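If you want to check this from the client side, a rough sketch using the library's server API (assuming `IServer.SlowlogGet` and the `CommandTrace` members used below; `allowAdmin` is included in case your setup gates server-level commands behind it):

```csharp
using System;
using System.Linq;
using StackExchange.Redis;

var muxer = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
var server = muxer.GetServer(muxer.GetEndPoints().First());

// fetch the 10 most recent slow-log entries recorded by redis-server
foreach (var trace in server.SlowlogGet(10))
{
    Console.WriteLine($"{trace.Duration.TotalMilliseconds:F1} ms: {string.Join(" ", trace.Arguments)}");
}
```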
Was there a big request preceding several small requests to the Redis that timed out?
---------------
The parameter “`qs`” in the error message tells you how many requests were sent from the client to the server but have not yet had a response processed. For some types of load you might see that this value keeps growing, because StackExchange.Redis uses a single TCP connection and can only read one response at a time. Even though the first operation timed out, it does not stop the data being sent to/from the server, and other requests are blocked until this is finished, thereby causing timeouts. One solution is to minimize the chance of timeouts by ensuring that your redis-server cache is large enough for your workload and splitting large values into smaller chunks. Another possible solution is to use a pool of ConnectionMultiplexer objects in your client, and choose the "least loaded" ConnectionMultiplexer when sending a new request. This should prevent a single timeout from causing other requests to also time out.
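A minimal sketch of such a pool, assuming `GetCounters().TotalOutstanding` as the "least loaded" metric (the pool size and the metric choice are assumptions, not a built-in library feature):

```csharp
using System.Linq;
using StackExchange.Redis;

class MultiplexerPool
{
    private readonly ConnectionMultiplexer[] connections;

    public MultiplexerPool(string configuration, int size = 4)
    {
        connections = new ConnectionMultiplexer[size];
        for (var i = 0; i < size; i++)
        {
            connections[i] = ConnectionMultiplexer.Connect(configuration);
        }
    }

    // pick the connection with the fewest operations still awaiting a response
    public IDatabase GetDatabase()
        => connections.OrderBy(c => c.GetCounters().TotalOutstanding).First().GetDatabase();
}
```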
-Are you seeing high number of busyio or busyworker threads in the timeout exception?
+Are you seeing a high number of busyio or busyworker threads in the timeout exception?
---------------
Asynchronous operations in StackExchange.Redis can come back in 3 different ways:
@@ -21,13 +20,11 @@ Asynchronous operations in StackExchange.Redis can come back in 3 different ways
- From 2.0 onwards, StackExchange.Redis maintains a dedicated thread-pool that it uses for completing most `async` operations; the error message may include an indication of how many of these workers are currently available - if this is zero, it may suggest that your system is particularly busy with asynchronous operations.
- .NET also has a global thread-pool; if the dedicated thread-pool is failing to keep up, additional work will be offered to the global thread-pool, so the message may include details of the global thread-pool.
The StackExchange.Redis dedicated thread-pool has a fixed size suitable for many common scenarios, which is shared between multiple connection instances (this can be customized by explicitly providing a `SocketManager` when creating a `ConnectionMultiplexer`). In many scenarios when using 2.0 and above, the vast majority of asynchronous operations will be serviced by this dedicated pool. This pool exists to avoid contention, as we've frequently seen cases where the global thread-pool becomes jammed with threads that need redis results to unblock them.
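If you want a particular connection to use its own dedicated pool rather than the shared default, a hedged sketch of supplying your own `SocketManager` (the name is illustrative; check the `SocketManager` constructors available in your version):

```csharp
using StackExchange.Redis;

var options = ConfigurationOptions.Parse("localhost:6379");
// give this multiplexer its own dedicated socket-manager instead of the shared default
options.SocketManager = new SocketManager("redisDedicated");
var dedicatedMuxer = ConnectionMultiplexer.Connect(options);
```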
.NET itself provides new global thread pool worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.
-For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which is injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen:
+For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen:
1. An existing thread becomes free to process the work
2. No existing thread becomes free for 500ms, so a new thread is created.
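If bursts routinely exceed those defaults, one common mitigation is to raise the thread-pool minimums at process start; a sketch (the numbers are illustrative, not a recommendation):

```csharp
using System;
using System.Threading;

// read the current minimums, then raise them so bursts don't wait on the
// 500ms-per-thread injection rate; tune the values to your own workload
ThreadPool.GetMinThreads(out var workerThreads, out var completionPortThreads);
ThreadPool.SetMinThreads(Math.Max(workerThreads, 200), Math.Max(completionPortThreads, 200));
```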