dbcore
| Vars | |
| --- | --- |
| all_queries | All the current queries that exist. |
| all_queries_num | Number of all queries, reset to 0 when logged in SStime_track. Used by SStime_track. |
| failed_connection_timeout | The world.time at which connection attempts can resume. |
| failed_connection_timeout_count | Total number of times connections have had to be timed out. |
| failed_connections | Number of failed connection attempts this try. Resets after the timeout or a successful connection. |
| max_connection_failures | Max number of consecutive failures before a timeout (a var here, not a define, so it can be vv'ed mid-round if needed). |
| processing_queries | Queries being checked for timeouts. |
| queries_active | Queries currently being handled by the database driver. |
| queries_active_num | Number of active queries, reset to 0 when logged in SStime_track. Used by SStime_track. |
| queries_standby | Queries pending execution, mapped to their complete arguments. |
| queries_standby_num | Number of standby queries, reset to 0 when logged in SStime_track. Used by SStime_track. |
| queued_log_entries_by_table | An associative list of lists of rows to add to database tables, keyed by the name of the target table. Used to queue up log entries for bigger, less frequent queries, to reduce the strain on the server caused by hundreds of additional queries constantly trying to run. |
| shutting_down | We are in the process of shutting down and should not allow more DB connections. |
| Procs | |
| --- | --- |
| QuerySelect | Run a list of query datums in parallel, blocking until they all complete. |
| add_log_to_mass_insert_queue | Mitigates the effect of a large influx of queries needing to be created for something like SQL-based game logs. Bundles a certain number of log entries together (default is 100, but it depends on the SQL_GAME_LOG_MIN_BUNDLE_SIZE config entry) before sending them, massively reducing the lag associated with logging so many entries to the database. |
| create_active_query | Helper proc for handling activation of queued queries. |
| reset_tracking | Resets the tracking numbers on the subsystem. Used by SStime_track. |
Var Details
all_queries
All the current queries that exist.
all_queries_num
Number of all queries, reset to 0 when logged in SStime_track. Used by SStime_track
failed_connection_timeout
The world.time at which connection attempts can resume.
failed_connection_timeout_count
Total number of times connections have had to be timed out.
failed_connections
Number of failed connection attempts this try. Resets after the timeout or a successful connection.
max_connection_failures
Max number of consecutive failures before a timeout (a var here, not a define, so it can be vv'ed mid-round if needed).
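Taken together, these failure-tracking vars implement a simple backoff. A paraphrased sketch of that interaction — the proc name, its shape, and the cooldown length are illustrative, not the actual implementation:

```
// Illustrative sketch of how the failure vars cooperate, not the real proc.
/datum/controller/subsystem/dbcore/proc/on_connection_failure()
	failed_connections++
	if(failed_connections >= max_connection_failures)
		// Stop attempting connections until world.time passes this mark.
		failed_connection_timeout = world.time + 5 SECONDS // cooldown length is a placeholder
		failed_connection_timeout_count++
		failed_connections = 0
```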
processing_queries
Queries being checked for timeouts.
queries_active
Queries currently being handled by the database driver.
queries_active_num
Number of active queries, reset to 0 when logged in SStime_track. Used by SStime_track
queries_standby
Queries pending execution, mapped to complete arguments
queries_standby_num
Number of standby queries, reset to 0 when logged in SStime_track. Used by SStime_track
queued_log_entries_by_table
An associative list of lists of rows to add to database tables, keyed by the name of the target table. Used to queue up log entries for bigger, less frequent queries, to reduce the strain on the server caused by hundreds of additional queries constantly trying to run.
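A sketch of the data shape, with hypothetical row contents:

```
// Each target table name keys a list of pending rows; each row is itself
// an associative list of column name to value. The values are placeholders.
var/list/queued = list(
	"game_log" = list(
		list("datetime" = "2024-01-01 00:00:00", "message" = "first queued entry"),
		list("datetime" = "2024-01-01 00:00:05", "message" = "second queued entry"),
	),
)
```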
shutting_down
We are in the process of shutting down and should not allow more DB connections
Proc Details
QuerySelect
Run a list of query datums in parallel, blocking until they all complete.
- queries - List of queries or single query datum to run.
- warn - Controls whether warn_execute() or Execute() is called.
- qdel - If you don't care about the result or about checking for errors, the queries can be deleted afterwards. Combined with invoke_async, this lets you run queries asynchronously without having to wait on them before they are deleted.
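A hedged usage sketch — the SQL, table, and column names are illustrative, and NewQuery is assumed to be the usual way query datums are created:

```
// Run two query datums in parallel and block until both finish.
var/datum/db_query/query_names = SSdbcore.NewQuery("SELECT ckey FROM player WHERE ckey = :ckey", list("ckey" = target_ckey))
var/datum/db_query/query_bans = SSdbcore.NewQuery("SELECT reason FROM ban WHERE ckey = :ckey", list("ckey" = target_ckey))
// warn = TRUE routes errors through warn_execute(); qdel = TRUE deletes the
// queries afterwards, for callers that don't need to inspect the results.
SSdbcore.QuerySelect(list(query_names, query_bans), warn = TRUE, qdel = TRUE)
```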
add_log_to_mass_insert_queue
This is a proc to hopefully mitigate the effect of a large influx of queries needing to be created
for something like SQL-based game logs. The goal here is to bundle a certain amount of those log
entries together (default is 100, but it depends on the SQL_GAME_LOG_MIN_BUNDLE_SIZE
config
entry) before sending them, so that we can massively reduce the amount of lag associated with
logging so many entries to the database.
Arguments:
- table - The name of the table to insert the log entry into.
- log_entry - Associative list representing all of the information that needs to be logged. Default format is as follows, for the game_log table (even if this could be used for another table):

```
list(
	"datetime" = ISOtime(),
	"round_id" = "[GLOB.round_id]",
	"ckey" = key_name(src),
	"loc" = loc_name(src),
	"type" = message_type,
	"message" = message,
)
```

Take a look at /atom/proc/log_message() for an example of implementation.
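A hedged caller sketch following that default format — the type and message values are made up:

```
// Queue one game_log row for bulk insertion instead of issuing an
// immediate INSERT; the subsystem flushes bundles once enough accumulate.
SSdbcore.add_log_to_mass_insert_queue("game_log", list(
	"datetime" = ISOtime(),
	"round_id" = "[GLOB.round_id]",
	"ckey" = key_name(src),
	"loc" = loc_name(src),
	"type" = "ATTACK",
	"message" = "Example log message",
))
```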
create_active_query
Helper proc for handling activating queued queries
reset_tracking
Resets the tracking numbers on the subsystem. Used by SStime_track.