Unverified Commit f5d49c24 authored by Nick Craver, committed by GitHub

Merge pull request #1339 from vp89/master

fix small typos in docs
parents 8c36d5b8 f572d3b0
Profiling
===
StackExchange.Redis exposes a handful of methods and types to enable performance profiling. Due to its asynchronous and multiplexing
behavior, profiling is a somewhat complicated topic.
Interfaces
...@@ -10,7 +10,7 @@ Interfaces
The profiling interface is composed of `IProfiler`, `ConnectionMultiplexer.RegisterProfiler(IProfiler)`, `ConnectionMultiplexer.BeginProfiling(object)`,
`ConnectionMultiplexer.FinishProfiling(object)`, and `IProfiledCommand`.

You register a single `IProfiler` with a `ConnectionMultiplexer` instance; it cannot be changed. You begin profiling for a given context (i.e. Thread,
Http Request, and so on) by calling `BeginProfiling(object)`, and finish by calling `FinishProfiling(object)`. `FinishProfiling(object)` returns
a collection of `IProfiledCommand`s which contain timing information for all commands sent to redis by the configured `ConnectionMultiplexer` between
the `(Begin|Finish)Profiling` calls with the given context.
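For orientation, here is a minimal sketch of how these pieces fit together under the 1.* API; the `conn`/`db` variables and the choice of the current thread as the context object are assumptions for illustration, not part of the library:

```csharp
using System;
using System.Threading;
using StackExchange.Redis;

// conn is an existing ConnectionMultiplexer and db an IDatabase obtained from it (assumed).
var profiler = new ToyProfiler();
conn.RegisterProfiler(profiler);             // can only be registered once per multiplexer

conn.BeginProfiling(Thread.CurrentThread);   // start collecting for this context
db.StringSet("profiling-demo", "hello");
db.StringGet("profiling-demo");
var timings = conn.FinishProfiling(Thread.CurrentThread);

foreach (var command in timings)
{
    Console.WriteLine($"{command.Command} took {command.ElapsedTime.TotalMilliseconds:F2}ms");
}

// A toy IProfiler that groups commands by the thread that issued them.
class ToyProfiler : IProfiler
{
    public object GetContext() => Thread.CurrentThread;
}
```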
...@@ -20,7 +20,7 @@ What "context" object should be used is application specific.
Available Timings
---
StackExchange.Redis exposes information about:

- The redis server involved
- The redis DB being queried
...@@ -144,12 +144,12 @@ for (var i = 0; i < 16; i++)
    threads.Add(thread);
}

threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```
`perThreadTimings` would end up with 16 entries of 1,000 `IProfiledCommand`s, keyed by the `Thread` that issued them.
Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.
...@@ -199,10 +199,10 @@ protected void Application_EndRequest()
    if (ctxObj != null)
    {
        var timings = RedisConnection.FinishProfiling(ctxObj);
        // do what you will with `timings` here
    }
}
```
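The matching begin-of-request hook is elided by the diff above; a rough sketch follows. The `RedisConnection` field is taken from the fragment, but using the current `HttpContext` as the context object is an assumption, and the registered `IProfiler` would need to resolve that same object:

```csharp
protected void Application_BeginRequest()
{
    // Use the current request as the profiling context (an assumption for illustration;
    // any stable per-request object the IProfiler can later resolve would work).
    var ctxObj = HttpContext.Current;
    if (ctxObj != null)
    {
        RedisConnection.BeginProfiling(ctxObj);
    }
}
```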
This implementation will group all redis commands, including `async/await`-ed ones, with the http request that initiated them.
\ No newline at end of file
Profiling
===
StackExchange.Redis exposes a handful of methods and types to enable performance profiling. Due to its asynchronous and multiplexing
behavior, profiling is a somewhat complicated topic.
Interfaces
...@@ -15,12 +15,10 @@ the library invokes this callback, and *if* a non-null session is returned: oper
a particular profiling session returns a collection of `IProfiledCommand`s which contain timing information for all commands sent to redis by the
configured `ConnectionMultiplexer`. It is the callback's responsibility to maintain any state required to track individual sessions.
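In code, the callback-based flow looks roughly like this; `muxer` and `db` are assumed to be an existing `ConnectionMultiplexer` and `IDatabase`:

```csharp
using System;
using StackExchange.Redis.Profiling;

// Create a session and hand it to the library via the callback.
var session = new ProfilingSession();
muxer.RegisterProfiler(() => session);

db.StringSet("profiled-key", "value");
db.StringGet("profiled-key");

// Retrieve everything profiled against that session.
foreach (var command in session.FinishProfiling())
{
    Console.WriteLine($"{command.Command}: {command.ElapsedTime.TotalMilliseconds:F2}ms against {command.EndPoint}");
}
```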
Available Timings
---
StackExchange.Redis exposes information about:

- The redis server involved
- The redis DB being queried
...@@ -36,14 +34,13 @@ StackExchange.Redis exposes information about:
`TimeSpan`s are high resolution, if supported by the runtime. `DateTime`s are only as precise as `DateTime.UtcNow`.
Example profilers
---
Due to StackExchange.Redis's asynchronous interface, profiling requires outside assistance to group related commands together.
This is achieved by providing the desired `ProfilingSession` object via the callback, and (later) calling `FinishProfiling()` on that session.

Probably the most useful general-purpose session-provider is one that provides sessions automatically and works between `async` calls; this is simply:
```csharp
class AsyncLocalProfiler
...@@ -168,12 +165,12 @@ for (var i = 0; i < 16; i++)
    );
    threads.Add(thread);
}

threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```
`perThreadTimings` would end up with 16 entries of 1,000 `IProfiledCommand`s, keyed by the `Thread` that issued them.
Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.
...@@ -217,7 +214,7 @@ protected void Application_EndRequest()
    if (session != null)
    {
        var timings = session.FinishProfiling();
        // do what you will with `timings` here
    }
}
...@@ -229,4 +226,4 @@ and ensure that the connection has the profiler registered when the connection i
connection.RegisterProfiler(() => _redisProfiler.GetSession());
```
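A sketch of what the `_redisProfiler` referenced above might look like; only `GetSession()` is taken from the snippet, while the class name and the use of `HttpContext.Items` for storage are assumptions:

```csharp
using System.Web;
using StackExchange.Redis.Profiling;

public class RedisProfiler
{
    private const string RequestContextKey = "RequestProfilingContext";

    // Called once per request (e.g. from Application_BeginRequest) to create the session.
    public ProfilingSession CreateSessionForCurrentRequest()
    {
        var ctx = HttpContext.Current;
        if (ctx == null) return null;

        var session = new ProfilingSession();
        ctx.Items[RequestContextKey] = session;
        return session;
    }

    // Handed to RegisterProfiler; returns the session for the current request, if any.
    public ProfilingSession GetSession()
    {
        var ctx = HttpContext.Current;
        return (ProfilingSession)ctx?.Items[RequestContextKey];
    }
}
```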
This implementation will group all redis commands, including `async/await`-ed ones, with the http request that initiated them.
\ No newline at end of file
...@@ -25,7 +25,7 @@ var values = new NameValueEntry[]
{
    new NameValueEntry("sensor_id", "1234"),
    new NameValueEntry("temp", "19.8")
};

var db = redis.GetDatabase();
var messageId = db.StreamAdd("sensor_stream", values);
...@@ -63,7 +63,7 @@ Console.WriteLine($"Length = {streams.First().Entries.Length}");
You can limit the number of messages returned per stream by using the `countPerStream` optional parameter.
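For example, a capped multi-stream read might look like the following; the stream names and start positions are hypothetical:

```csharp
var streams = db.StreamRead(new[]
{
    new StreamPosition("event_stream", "0-0"),
    new StreamPosition("score_stream", "0-0")
}, countPerStream: 10); // return at most 10 entries per stream
```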
The `StreamRange` method allows you to return a range of entries within a stream.

```csharp
var messages = db.StreamRange("event_stream", minId: "-", maxId: "+");
...@@ -72,9 +72,9 @@ var messages = db.StreamRange("event_stream", minId: "-", maxId: "+");
The `"-"` and `"+"` special characters indicate the smallest and greatest IDs possible. These values are the default values that will be used if no value is passed for the respective parameter. You also have the option to read the stream in reverse by using the `messageOrder` parameter. The `StreamRange` method also provides the ability to limit the number of entries returned by using the `count` parameter. The `"-"` and `"+"` special characters indicate the smallest and greatest IDs possible. These values are the default values that will be used if no value is passed for the respective parameter. You also have the option to read the stream in reverse by using the `messageOrder` parameter. The `StreamRange` method also provides the ability to limit the number of entries returned by using the `count` parameter.
```csharp
var messages = db.StreamRange("event_stream",
    minId: "0-0",
    maxId: "+",
    count: 100,
    messageOrder: Order.Descending);
```
...@@ -168,10 +168,10 @@ The `StreamClaim` method can be used to change ownership of messages consumed by
```csharp
// Change ownership to consumer_2 for the first 5 messages pending for consumer_1.
var pendingMessages = db.StreamPendingMessages("events_stream",
    "events_cg",
    count: 5,
    consumerName: "consumer_1",
    minId: "0-0");

db.StreamClaim("events_stream",
...@@ -182,7 +182,3 @@ db.StreamClaim("events_stream",
```
There are several other methods used to process streams using consumer groups. Please reference the Streams unit tests for those methods and how they are used.
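As a rough illustration of the shape of that API (the group, consumer, and stream names are hypothetical; treat the unit tests as the authoritative reference):

```csharp
// Create a consumer group that starts reading from the beginning of the stream.
db.StreamCreateConsumerGroup("events_stream", "events_cg", StreamPosition.Beginning);

// Read up to 10 entries that have not yet been delivered to this group.
var entries = db.StreamReadGroup("events_stream", "events_cg", "consumer_1", StreamPosition.NewMessages, count: 10);

// Acknowledge each entry once it has been processed.
foreach (var entry in entries)
{
    db.StreamAcknowledge("events_stream", "events_cg", entry.Id);
}
```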
...@@ -35,7 +35,7 @@ on whether you have a `SynchronizationContext`. If you *don't* (common for conso
services, etc), then the TPL uses the standard thread-pool mechanisms to schedule the
continuation. If you *do* have a `SynchronizationContext` (common in UI applications
and web-servers), then its `Post` method is used instead; the `Post` method is *meant* to
be an asynchronous dispatch API. But... not all implementations are equal. Some
`SynchronizationContext` implementations treat `Post` as a synchronous invoke. This is true
in particular of `LegacyAspNetSynchronizationContext`, which is what you get if you
configure ASP.NET with:
...@@ -66,4 +66,4 @@ both appropriate and necessary, and does not represent additional overhead.
The library will attempt to detect `LegacyAspNetSynchronizationContext` in particular,
but this is not always reliable. The flag is also available for manual use with other
similar scenarios.
\ No newline at end of file
...@@ -4,20 +4,19 @@ Verify what's the maximum bandwidth supported on your client and on the server w
Similarly, verify that you are not getting CPU bound on the client or on the server box, which would cause requests to wait for CPU time and thereby time out.
Are you experiencing "thread theft" of the reader? Are you experiencing "thread theft" of the reader?
---------------
The parameter “`qs`” in the error message tells you the state of the reader; if this is frequently reporting `CompletePendingMessage`,
it is possible that the reader loop has been hijacked; see [Thread Theft](ThreadTheft) for specific guidance.
Are there commands taking a long time to process on the redis-server?
---------------
There can be commands that are taking a long time to process on the redis-server, causing the request to time out. A few examples of long-running commands are `MGET` with a large number of keys, `KEYS *`, or a poorly written Lua script. You can run the `SLOWLOG` command to see if there are requests taking longer than expected. More details regarding the command can be found [here](https://redis.io/commands/slowlog).
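From the client side you can also query the slow log through an `IServer` handle; a quick sketch, assuming an existing `ConnectionMultiplexer` named `redis` and a hypothetical endpoint:

```csharp
var server = redis.GetServer("myredis.example.com", 6379);

// SLOWLOG GET 10 - the ten slowest recent commands recorded by the server.
foreach (var entry in server.SlowlogGet(count: 10))
{
    Console.WriteLine($"{entry.Time:u} took {entry.Duration.TotalMilliseconds}ms: {string.Join(" ", entry.Arguments)}");
}
```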
Was there a big request preceding several small requests to the Redis that timed out?
---------------
The parameter “`qs`” in the error message tells you how many requests were sent from the client to the server, but have not yet processed a response. For some types of load you might see that this value keeps growing, because StackExchange.Redis uses a single TCP connection and can only read one response at a time. Even though the first operation timed out, it does not stop the data being sent to/from the server, and other requests are blocked until this is finished, thereby causing timeouts. One solution is to minimize the chance of timeouts by ensuring that your redis-server cache is large enough for your workload and splitting large values into smaller chunks. Another possible solution is to use a pool of ConnectionMultiplexer objects in your client, and choose the "least loaded" ConnectionMultiplexer when sending a new request. This should prevent a single timeout from causing other requests to also time out.
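A very rough sketch of that pooling idea, using the number of outstanding operations as the "least loaded" signal; the pool size and selection heuristic are application choices, not library features:

```csharp
using System.Linq;
using StackExchange.Redis;

public class RedisConnectionPool
{
    private readonly ConnectionMultiplexer[] _connections;

    public RedisConnectionPool(string configuration, int poolSize = 4)
    {
        _connections = Enumerable.Range(0, poolSize)
            .Select(_ => ConnectionMultiplexer.Connect(configuration))
            .ToArray();
    }

    // Pick the multiplexer with the fewest operations still awaiting a response.
    public IDatabase GetDatabase()
        => _connections
            .OrderBy(c => c.GetCounters().TotalOutstanding)
            .First()
            .GetDatabase();
}
```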
Are you seeing a high number of busyio or busyworker threads in the timeout exception?
---------------
Asynchronous operations in StackExchange.Redis can come back in 3 different ways:
...@@ -26,13 +25,11 @@ Asynchronous operations in StackExchange.Redis can come back in 3 different ways
- From 2.0 onwards, StackExchange.Redis maintains a dedicated thread-pool that it uses for completing most `async` operations; the error message may include an indication of how many of these workers are currently available - if this is zero, it may suggest that your system is particularly busy with asynchronous operations.
- .NET also has a global thread-pool; if the dedicated thread-pool is failing to keep up, additional work will be offered to the global thread-pool, so the message may include details of the global thread-pool.

The StackExchange.Redis dedicated thread-pool has a fixed size suitable for many common scenarios, which is shared between multiple connection instances (this can be customized by explicitly providing a `SocketManager` when creating a `ConnectionMultiplexer`). In many scenarios when using 2.0 and above, the vast majority of asynchronous operations will be serviced by this dedicated pool. This pool exists to avoid contention, as we've frequently seen cases where the global thread-pool becomes jammed with threads that need redis results to unblock them.
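Supplying your own `SocketManager` looks roughly like this; whether you actually need one, and the name used, are application-specific:

```csharp
var options = ConfigurationOptions.Parse("localhost:6379");
options.SocketManager = new SocketManager("MyAppRedis"); // dedicated worker pool for this connection
var conn = ConnectionMultiplexer.Connect(options);
```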
.NET itself provides new global thread pool worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.

For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen:
1. An existing thread becomes free to process the work
2. No existing thread becomes free for 500ms, so a new thread is created.
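If that injection delay is hurting you, one common mitigation is to raise the global pool minimums at application startup so extra threads are created without the 500ms wait; the numbers below are purely illustrative and should be tuned to your workload:

```csharp
// Raise the floor for both worker and IOCP threads; returns false if the values are rejected.
System.Threading.ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200);
```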
...@@ -41,16 +38,16 @@ Basically, *if* you're hitting the global thread pool (rather than the dedicated
If we look at an example error message from StackExchange.Redis 2.0, you will see that it now prints ThreadPool statistics (see IOCP and WORKER details below).
    Timeout performing GET MyKey (1000ms), inst: 2, qs: 6, in: 0, mgr: 9 of 10 available,
    IOCP: (Busy=6,Free=994,Min=4,Max=1000),
    WORKER: (Busy=3,Free=997,Min=4,Max=1000)
In the above example, there are 6 operations currently awaiting replies from redis ("`qs`"), there are 0 bytes waiting to be read from the input stream from redis ("`in`"), and the dedicated thread-pool is almost fully available to service asynchronous completions ("`mgr`"). You can also see that for the IOCP pool there are 6 busy threads and the system is configured to allow a minimum of 4 threads.
In 1.*, the information is similar but slightly different:
    System.TimeoutException: Timeout performing GET MyKey, inst: 2, mgr: Inactive,
    queue: 6, qu: 0, qs: 6, qc: 0, wr: 0, wq: 0, in: 0, ar: 0,
    IOCP: (Busy=6,Free=994,Min=4,Max=1000),
    WORKER: (Busy=3,Free=997,Min=4,Max=1000)
It may seem contradictory that there are *fewer* numbers in 2.0 - this is because the 2.0 code has been redesigned not to require some additional steps.
...