StackExchange.Redis exposes a handful of methods and types to enable performance profiling. Due to its asynchronous and multiplexing
behavior, profiling is a somewhat complicated topic.

Interfaces
---

The profiling interface is composed of `IProfiler`, `ConnectionMultiplexer.RegisterProfiler(IProfiler)`, `ConnectionMultiplexer.BeginProfiling(object)`,
`ConnectionMultiplexer.FinishProfiling(object)`, and `IProfiledCommand`.

You register a single `IProfiler` with a `ConnectionMultiplexer` instance; it cannot be changed afterwards. You begin profiling for a given context (i.e. thread,
HTTP request, and so on) by calling `BeginProfiling(object)`, and finish by calling `FinishProfiling(object)`. `FinishProfiling(object)` returns
a collection of `IProfiledCommand`s which contain timing information for all commands sent to redis by the configured `ConnectionMultiplexer` between
the `(Begin|Finish)Profiling` calls with the given context.
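
By way of illustration, a minimal sketch of how these pieces fit together (matching the interface described above) might look like the following; the `ToyProfiler` type and the `conn`/`db` variables (an existing `ConnectionMultiplexer` and `IDatabase`) are placeholders, not part of the library:

```csharp
// A toy IProfiler that funnels every command into one shared context.
class ToyProfiler : IProfiler
{
    public readonly object SharedContext = new object();

    // Called by StackExchange.Redis to decide which context a command belongs to.
    public object GetContext() => SharedContext;
}

var profiler = new ToyProfiler();
conn.RegisterProfiler(profiler);                 // one profiler per multiplexer, set once

conn.BeginProfiling(profiler.SharedContext);     // start capturing commands for this context
db.StringSet("profiling-demo", "some value");    // commands issued here are recorded
db.StringGet("profiling-demo");

foreach (IProfiledCommand command in conn.FinishProfiling(profiler.SharedContext))
{
    Console.WriteLine($"{command.Command} took {command.ElapsedTime.TotalMilliseconds:F2}ms");
}
```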

Available Timings
---

StackExchange.Redis exposes information about:

- The redis server involved
- The redis DB being queried
    threads.Add(thread);
}

threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```

`perThreadTimings` would end up with 16 entries of 1,000 `IProfiledCommand`s, keyed by the `Thread` that issued them.

Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.
The `"-"` and `"+"` special characters indicate the smallest and greatest IDs possible. These values are the default values that will be used if no value is passed for the respective parameter. You also have the option to read the stream in reverse by using the `messageOrder` parameter. The `StreamRange` method also provides the ability to limit the number of entries returned by using the `count` parameter.
The `"-"` and `"+"` special characters indicate the smallest and greatest IDs possible. These values are the default values that will be used if no value is passed for the respective parameter. You also have the option to read the stream in reverse by using the `messageOrder` parameter. The `StreamRange` method also provides the ability to limit the number of entries returned by using the `count` parameter.

```csharp
var messages = db.StreamRange("event_stream",
    minId: "0-0",
    maxId: "+",
    count: 100,
    messageOrder: Order.Descending);
```

```csharp
// Change ownership to consumer_2 for the first 5 messages pending for consumer_1.
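// The calls below are a sketch of one way to do that with StreamPendingMessages and
// StreamClaim; the stream name ("events_stream"), group name ("events_cg"), and the
// consumer names are illustrative placeholders, not values taken from this document.
// (Requires System.Linq for the Select/ToArray calls.)
var pendingMessages = db.StreamPendingMessages("events_stream",
    "events_cg",
    count: 5,
    consumerName: "consumer_1");

var messageIds = pendingMessages.Select(m => m.MessageId).ToArray();

// Claim the pending messages for consumer_2; minIdleTimeInMs: 0 claims them regardless of idle time.
var claimedEntries = db.StreamClaim("events_stream",
    "events_cg",
    claimingConsumer: "consumer_2",
    minIdleTimeInMs: 0,
    messageIds: messageIds);
```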

There are several other methods used to process streams using consumer groups. Please reference the Streams unit tests for those methods and how they are used.
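
For instance, a consumer would typically read with `StreamReadGroup` and then acknowledge what it has processed with `StreamAcknowledge`. The sketch below is illustrative only; the stream, group, and consumer names are placeholders:

```csharp
// Read up to 10 messages that have not yet been delivered to this group's consumers.
var entries = db.StreamReadGroup("events_stream", "events_cg", "consumer_1", count: 10);
foreach (var entry in entries)
{
    // ... process entry.Values here ...

    // Acknowledge the message so it is removed from this consumer's pending list.
    db.StreamAcknowledge("events_stream", "events_cg", entry.Id);
}
```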

Verify the maximum bandwidth supported on your client and on the server hosting redis-server. If requests are bound by bandwidth, they will take longer to complete and can thereby cause timeouts.

Similarly, verify that you are not CPU bound on the client or on the server box, which would cause requests to wait for CPU time and thereby time out.

Are there commands taking a long time to process on the redis-server?
---------------

There can be commands that take a long time to process on the redis-server, causing the request to time out. A few examples of long-running commands are `mget` with a large number of keys, `keys *`, or a poorly written Lua script. You can run the `SLOWLOG` command to see if there are requests taking longer than expected. More details regarding the command can be found [here](https://redis.io/commands/slowlog).
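
As a rough sketch, the slow log can also be inspected from the client via `IServer.SlowlogGet`; this assumes an already-connected `ConnectionMultiplexer` named `conn` with a single endpoint:

```csharp
// Inspect the 10 most recent slow log entries recorded by the server.
var server = conn.GetServer(conn.GetEndPoints()[0]);
foreach (var entry in server.SlowlogGet(10))
{
    Console.WriteLine(
        $"{entry.Time:u} took {entry.Duration.TotalMilliseconds:F1}ms: {string.Join(" ", entry.Arguments)}");
}
```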

Was there a big request preceding several small requests to the Redis that timed out?
---------------

The parameter `qs` in the error message tells you how many requests were sent from the client to the server but have not yet had a response processed. For some types of load you might see this value keep growing, because StackExchange.Redis uses a single TCP connection and can only read one response at a time. Even though the first operation timed out, that does not stop data being sent to/from the server, and other requests are blocked until it finishes, which in turn causes timeouts. One solution is to minimize the chance of timeouts by ensuring that your redis-server cache is large enough for your workload and by splitting large values into smaller chunks. Another possible solution is to use a pool of `ConnectionMultiplexer` objects in your client and choose the "least loaded" one when sending a new request. This should prevent a single timeout from causing other requests to also time out.
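
As an illustration of that last idea only (this is not an API provided by StackExchange.Redis), such a pool might look like the sketch below; the type name, pool size, and the choice of `GetCounters().TotalOutstanding` as the load metric are all assumptions:

```csharp
using System.Linq;
using StackExchange.Redis;

public sealed class RedisConnectionPool
{
    private readonly ConnectionMultiplexer[] _connections;

    public RedisConnectionPool(string configuration, int poolSize = 4)
    {
        // Eagerly create a fixed number of multiplexers; each owns its own connection(s).
        _connections = Enumerable.Range(0, poolSize)
            .Select(_ => ConnectionMultiplexer.Connect(configuration))
            .ToArray();
    }

    // Pick the multiplexer with the fewest operations still awaiting a response.
    public IDatabase GetLeastLoadedDatabase() =>
        _connections
            .OrderBy(c => c.GetCounters().TotalOutstanding)
            .First()
            .GetDatabase();
}
```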

Are you seeing a high number of busyio or busyworker threads in the timeout exception?
---------------

Asynchronous operations in StackExchange.Redis can come back in 3 different ways:

- From 2.0 onwards, StackExchange.Redis maintains a dedicated thread-pool that it uses for completing most `async` operations; the error message may include an indication of how many of these workers are currently available - if this is zero, it may suggest that your system is particularly busy with asynchronous operations.
- .NET also has a global thread-pool; if the dedicated thread-pool is failing to keep up, additional work will be offered to the global thread-pool, so the message may include details of the global thread-pool.

The StackExchange.Redis dedicated thread-pool has a fixed size suitable for many common scenarios, and it is shared between multiple connection instances (this can be customized by explicitly providing a `SocketManager` when creating a `ConnectionMultiplexer`). In many scenarios when using 2.0 and above, the vast majority of asynchronous operations will be serviced by this dedicated pool. This pool exists to avoid contention, as we've frequently seen cases where the global thread-pool becomes jammed with threads that need redis results to unblock them.
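
For example, a custom `SocketManager` can be supplied through `ConfigurationOptions`. The sketch below is only illustrative; the name is a placeholder, and the available `SocketManager` constructor overloads vary between library versions, so check the API of the version you are using:

```csharp
// Create a named SocketManager so its threads are identifiable in dumps and profilers.
var socketManager = new SocketManager("MyAppRedis");

var options = ConfigurationOptions.Parse("localhost:6379");
options.SocketManager = socketManager;

var conn = ConnectionMultiplexer.Connect(options);
```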

.NET itself provides new global thread pool worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.

For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen:

1. An existing thread becomes free to process the work
2. No existing thread becomes free for 500ms, so a new thread is created.
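
To see where those limits sit on a given machine, you can query the global thread pool directly. The sketch below (including the value 200) is purely illustrative and is not a recommendation from StackExchange.Redis:

```csharp
using System;
using System.Threading;

// Inspect the current global thread-pool configuration and availability.
ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIocp);
Console.WriteLine($"Min worker: {minWorker}, min IOCP: {minIocp}");
Console.WriteLine($"Available worker: {freeWorker}, available IOCP: {freeIocp}");

// Optionally raise the minimums so a burst of work is not throttled to one new
// thread per 500ms; 200 here is an arbitrary illustration, not a recommendation.
ThreadPool.SetMinThreads(Math.Max(minWorker, 200), Math.Max(minIocp, 200));
```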

If we look at an example error message from StackExchange.Redis 2.0, you will see that it now prints ThreadPool statistics (see IOCP and WORKER details below).

```
Timeout performing GET MyKey (1000ms), inst: 2, qs: 6, in: 0, mgr: 9 of 10 available,
IOCP: (Busy=6,Free=994,Min=4,Max=1000),
WORKER: (Busy=3,Free=997,Min=4,Max=1000)
```

In the above example, there are 6 operations currently awaiting replies from redis ("`qs`"), there are 0 bytes waiting to be read from the input stream from redis ("`in`"), and the dedicated thread-pool is almost fully available to service asynchronous completions ("`mgr`"). You can also see that the IOCP thread pool has 6 busy threads and the system is configured to allow a minimum of 4 threads.

In 1.*, the information is similar but slightly different:

```
System.TimeoutException: Timeout performing GET MyKey, inst: 2, mgr: Inactive,
```

It may seem contradictory that there are *fewer* numbers in 2.0 - this is because the 2.0 code has been redesigned not to require some additional steps.