D++ (DPP)
C++ Discord API Bot Library
dpp::request_queue Class Reference
The request_queue class manages rate limits and marshals HTTP requests that have been built as http_request objects.
Classes

struct completed_request
    A completed request. Contains both the request and the response.

struct queued_deleting_request
    A request queued for deletion in the queue.
Public Member Functions

request_queue(class cluster *owner, uint32_t request_threads = 8)
    Constructor.

request_queue& add_request_threads(uint32_t request_threads)
    Add more request threads to the library at runtime.

uint32_t get_request_thread_count() const
    Get the request thread count.

~request_queue()
    Destroy the request queue object. Side effects: Joins and deletes queue threads.

request_queue& post_request(std::unique_ptr<http_request> req)
    Put an http_request into the request queue.

bool is_globally_ratelimited() const
    Returns true if the bot is currently globally rate limited.
Protected Member Functions

void out_loop()
    Outbound queue thread loop.
Protected Attributes

class cluster* creator
    The cluster that owns this request_queue.

std::shared_mutex out_mutex
    Outbound queue mutex for thread safety.

std::thread* out_thread
    Outbound queue thread. There is only ever one 'out queue' thread, however many inbound request threads exist.

std::condition_variable out_ready
    Outbound queue condition variable. Signalled when there are completed requests whose callbacks need to be called.

std::queue<completed_request> responses_out
    Completed requests queue.

std::vector<std::unique_ptr<in_thread>> requests_in
    A vector of inbound request threads forming a pool. Requests are assigned to a thread based on their URL.

std::vector<queued_deleting_request> responses_to_delete
    Completed requests to delete, sorted by deletion time.

std::atomic<bool> terminating
    Set to true if the threads should terminate.

bool globally_ratelimited
    True if globally rate limited; makes the entire request thread wait.

uint64_t globally_limited_for
    How many seconds we are globally rate limited for.

uint32_t in_thread_pool_size
    Number of request threads in the thread pool.
Friends

class in_thread
    Required so in_thread can access these member variables.
Detailed Description

The request_queue class manages rate limits and marshals HTTP requests that have been built as http_request objects.
It ensures asynchronous delivery of events and queueing of requests.
It spawns two kinds of thread: a pool of request threads that make the outbound HTTP requests and push the returned results onto a queue, and a single thread that dispatches those results to the callers' callbacks. They are separated so that if the user takes a long time processing a reply in their callback, it does not delay other requests being sent, and if an HTTP request takes a long time due to latency, it does not hold up user processing.
There are usually two request_queue objects in each dpp::cluster, one of which is used internally for the various REST methods to Discord such as sending messages, and the other used to support user REST calls via dpp::cluster::request().
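For example, a user-level REST call reaches the second of those queues through dpp::cluster::request(). The following is a minimal sketch, assuming a typical bot setup; the token and URL are placeholders, and the exact set of optional parameters accepted by dpp::cluster::request() may differ between library versions.

    #include <dpp/dpp.h>
    #include <iostream>

    int main() {
        dpp::cluster bot("your-bot-token");

        bot.on_ready([&bot](const dpp::ready_t& event) {
            /* This arbitrary HTTP request goes through the user-facing
             * request_queue rather than the internal Discord REST queue. */
            bot.request("https://example.com/api/status", dpp::m_get,
                [](const dpp::http_request_completion_t& reply) {
                    std::cout << "HTTP " << reply.status << "\n" << reply.body << "\n";
                });
        });

        bot.start(dpp::st_wait);
    }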
Constructor & Destructor Documentation

dpp::request_queue::request_queue(class cluster *owner, uint32_t request_threads = 8)

Constructor.

Parameters
    owner            The creating cluster.
    request_threads  The number of HTTP request threads to allocate to the thread pool. Eight threads are allocated by default.

Side effects: Creates threads for the queue.
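A cluster normally constructs and owns its request queues itself, so building one by hand is rare. Purely to illustrate the parameters, and assuming <dpp/dpp.h> pulls in the queue definitions:

    #include <dpp/dpp.h>

    int main() {
        dpp::cluster bot("your-bot-token");

        /* Illustration only: ask for 16 request threads instead of the
         * default 8. In normal use the cluster manages its own queues. */
        dpp::request_queue queue(&bot, 16);
    }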
dpp::request_queue::~request_queue()

Destroy the request queue object. Side effects: Joins and deletes queue threads.
Member Function Documentation

request_queue& dpp::request_queue::add_request_threads(uint32_t request_threads)

Add more request threads to the library at runtime.

Parameters
    request_threads  Number of threads to add. It is not possible to scale down at runtime.
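For instance, a small sketch of growing the pool when a burst of REST traffic is expected; the function name and the choice of four extra threads are illustrative only:

    #include <dpp/dpp.h>
    #include <iostream>

    /* Grow the inbound request thread pool. The pool can only grow at
     * runtime; there is no way to scale it back down. */
    void grow_pool(dpp::request_queue& queue) {
        queue.add_request_threads(4);
        std::cout << "Request threads: " << queue.get_request_thread_count() << "\n";
    }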
uint32_t dpp::request_queue::get_request_thread_count() const

Get the request thread count.
bool dpp::request_queue::is_globally_ratelimited() const

Returns true if the bot is currently globally rate limited.
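A brief sketch of using this check to defer non-essential work; the surrounding function is hypothetical:

    #include <dpp/dpp.h>

    /* Defer bulk or optional REST traffic while Discord has the bot under
     * a global rate limit; anything already queued simply waits. */
    bool should_defer_bulk_work(const dpp::request_queue& queue) {
        return queue.is_globally_ratelimited();
    }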
void dpp::request_queue::out_loop() [protected]

Outbound queue thread loop.
request_queue& dpp::request_queue::post_request(std::unique_ptr<http_request> req)

Put an http_request into the request queue.

Parameters
    req  Request to add.
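Ownership of the request transfers to the queue along with the unique_ptr. A small sketch, assuming the http_request has already been built elsewhere (its constructor arguments are not shown here):

    #include <dpp/dpp.h>
    #include <memory>
    #include <utility>

    /* Hand a pre-built request to the queue; the queue takes ownership
     * and manages the request's lifetime from this point on. */
    void enqueue(dpp::request_queue& queue, std::unique_ptr<dpp::http_request> req) {
        queue.post_request(std::move(req));
    }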
Member Data Documentation

class cluster* dpp::request_queue::creator [protected]

The cluster that owns this request_queue.
uint64_t dpp::request_queue::globally_limited_for [protected]

How many seconds we are globally rate limited for.
bool dpp::request_queue::globally_ratelimited [protected]

True if globally rate limited; makes the entire request thread wait.
uint32_t dpp::request_queue::in_thread_pool_size [protected]

Number of request threads in the thread pool.
std::shared_mutex dpp::request_queue::out_mutex [protected]

Outbound queue mutex for thread safety.
std::condition_variable dpp::request_queue::out_ready [protected]

Outbound queue condition variable. Signalled when there are completed requests whose callbacks need to be called.
std::thread* dpp::request_queue::out_thread [protected]

Outbound queue thread. Note that although there are many 'in queues', which handle the HTTP requests, there is only ever one 'out queue' which dispatches the results to the caller. This simplifies thread management in bots that use the library, as less mutexing and thread-safety boilerplate is required.
std::vector<std::unique_ptr<in_thread>> dpp::request_queue::requests_in [protected]

A vector of inbound request threads forming a pool. There is a set number of these, defined by a constant in queues.cpp. A request is always placed on the same element in this vector, based upon its URL, so that two conditions are satisfied:

1) Any requests for the same rate-limit bucket are handled by the same thread in the pool, so that they do not create unnecessary 429 errors.
2) Requests for different endpoints go into different buckets, so that they may be requested in parallel.

A global rate-limit event pauses all threads in the pool. These are few and far between.
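As an illustration of the bucketing idea only (this is not the library's actual code), a request can be pinned to a pool element by hashing its URL, so that every request for the same endpoint, and therefore the same rate-limit bucket, is serialised on one thread while other endpoints proceed in parallel:

    #include <cstddef>
    #include <functional>
    #include <string>

    /* Illustrative only: map a request URL to a fixed pool index. */
    std::size_t pick_pool_index(const std::string& url, std::size_t pool_size) {
        return std::hash<std::string>{}(url) % pool_size;
    }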
std::queue<completed_request> dpp::request_queue::responses_out [protected]

Completed requests queue.
std::vector<queued_deleting_request> dpp::request_queue::responses_to_delete [protected]

Completed requests to delete, sorted by deletion time.
std::atomic<bool> dpp::request_queue::terminating [protected]

Set to true if the threads should terminate.