NexusDB multi-threaded applications

Usage of NexusDB in a multi-threaded application

Adapted from newsgroup postings by Thorsten Engler of NexusDB support.

Basic requirements

  • You need one TnxServerEngine / TnxRemoteServerEngine, which can be shared by all threads (see the sketch below).
  • A session and all components attached to it (TransContext, Database, Table, Query) can be used by only a single thread at a time.

It's important to point out here that sessions are thread-agnostic. Nothing stops you from creating a session in one thread and later using it in another, as long as only one thread at a time makes calls against anything associated with that session. The low-level code has a locking mechanism that detects simultaneous access by multiple threads and returns DBIERR_REENTERED errors.
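
For a remote server, the shared engine typically means one transport plus one TnxRemoteServerEngine created at application startup. A minimal sketch, assuming a Winsock transport (TnxWinsockTransport) and an illustrative server name; check your NexusDB version for the exact transport class and its properties:

  var
    Transport: TnxWinsockTransport;
    RemoteEngine: TnxRemoteServerEngine;

  procedure SetupSharedEngine;
  begin
    // One transport and one remote engine for the whole application.
    Transport := TnxWinsockTransport.Create(nil);
    Transport.ServerName := 'NexusDB'; // hypothetical server name
    RemoteEngine := TnxRemoteServerEngine.Create(nil);
    RemoteEngine.Transport := Transport;
    RemoteEngine.Active := True;
    // From here on, every thread attaches its own TnxSession to RemoteEngine.
  end;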

Styles of multi-threaded applications

In general, there are three ways I can see here:

Create a new TnxSession manually for each thread

That's the way to go if you have long-running threads that are under your control.
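
A minimal sketch of such a long-running worker; the component wiring follows the usual NexusDB component hierarchy (Session, Database, Table properties), while RemoteEngine, the alias and the table name are illustrative assumptions:

  type
    TWorkerThread = class(TThread)
    protected
      procedure Execute; override;
    end;

  procedure TWorkerThread.Execute;
  var
    Session: TnxSession;
    Database: TnxDatabase;
    Table: TnxTable;
  begin
    // Everything below is created, used and freed by this thread only.
    Session := TnxSession.Create(nil);
    try
      Session.ServerEngine := RemoteEngine; // the shared engine
      Session.Active := True;
      Database := TnxDatabase.Create(nil);
      try
        Database.Session := Session;
        Database.AliasName := 'MyAlias'; // hypothetical alias
        Database.Active := True;
        Table := TnxTable.Create(nil);
        try
          Table.Database := Database;
          Table.TableName := 'Customers'; // hypothetical table
          Table.Open;
          while not (Terminated or Table.Eof) do
            Table.Next; // per-row work goes here
        finally
          Table.Free;
        end;
      finally
        Database.Free;
      end;
    finally
      Session.Free;
    end;
  end;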

Use the TnxSessionPool component

The TnxSessionPool component has an AcquireSession function which returns a TnxPooledSession that you can use.

That's the way to go if you have many requests coming in on arbitrary threads that aren't under your control and the requests differ in regard to the queries / tables you need.
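
A minimal usage sketch. AcquireSession and TnxPooledSession are named above; that AcquireSession can be called without arguments, that the pooled object exposes its session as Pooled.Session, and that freeing it hands the session back to the pool are all assumptions, so check the TnxSessionPool documentation for the exact pattern:

  procedure HandleRequest(Pool: TnxSessionPool);
  var
    Pooled: TnxPooledSession;
    Database: TnxDatabase;
    Query: TnxQuery;
  begin
    Pooled := Pool.AcquireSession; // assumption: parameterless call
    try
      Database := TnxDatabase.Create(nil);
      Query := TnxQuery.Create(nil);
      try
        Database.Session := Pooled.Session; // assumption: wrapper exposes its session
        Database.AliasName := 'MyAlias';    // hypothetical alias
        Database.Active := True;
        Query.Database := Database;
        Query.SQL.Text := 'SELECT COUNT(*) FROM Customers'; // hypothetical query
        Query.Open;
        // ... consume the result ...
        Query.Close;
      finally
        Query.Free;
        Database.Free;
      end;
    finally
      Pooled.Free; // assumption: freeing returns the session to the pool
    end;
  end;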

Create your own class which contains session / database / table components

Create a class of your own which contains the session / database / table components that you require, all set up and ready to use. Then write a pooling system of your own (TnxThreadSafeStack should help you there).

That's the way to go if you have many requests coming in on arbitrary threads that aren't under your control and the requests are the same in regard to the queries / tables you need.
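
A sketch of such a scheme; TnxThreadSafeStack is named above, but its Push/Pop method names and everything inside TRequestWorker are illustrative assumptions:

  type
    TRequestWorker = class
    public
      Session: TnxSession;
      Database: TnxDatabase;
      Customers: TnxTable; // pre-opened on the table this request type needs
      constructor Create;  // wires up Session/Database/Customers as in the
      destructor Destroy; override; // per-thread sketch above
    end;

  var
    WorkerPool: TnxThreadSafeStack; // holds idle TRequestWorker instances

  procedure HandleRequest;
  var
    Worker: TRequestWorker;
  begin
    Worker := TRequestWorker(WorkerPool.Pop); // assumption: Pop returns nil when empty
    if Worker = nil then
      Worker := TRequestWorker.Create; // grow the pool on demand
    try
      // Everything is already set up and open; do the fixed work directly.
      Worker.Customers.Refresh;
    finally
      WorkerPool.Push(Worker); // return the worker for reuse by any thread
    end;
  end;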

Automatic Connection Pooling

No matter whether you pool sessions or open and close them on demand: in the case of a remote server accessed via one of the NexusDB transports, *physical* connections are already pooled inside the transport. For example, if you have 1000 open sessions in 1000 threads, but at any given moment only 5 threads are actually inside a call to the server, there will be only 5 physical connections between client and server.

The engine was created with scalability as a primary design goal. There are no hard limits on the number of open sessions/databases/tables/queries. Besides the memory used, the performance impact of an open component is usually zero. For example, if you have 1000 open cursors, all on the same table, then in FF2 and other database engines, whenever one of these cursors modifies the table, all the other cursors need to be informed. In NexusDB this is not the case; the other cursors detect changes as required when they are actually used.

Many other engines use sorted lists to manage their internal objects; as a result, performance degrades as these sorted lists grow and inserting/removing takes longer. NexusDB manages its internal objects in other ways that are completely independent of the number of open objects. The only real limitation is the amount of physical memory and virtual address space available to the server.

Thread safety

The only thing you need to keep track of is that a session is used by only a single thread at a time. Everything else is taken care of inside the engine.

The engine itself is fully thread-safe and, again, is optimized for scalability. For example, any number of threads can perform read operations against the same table at the same time, and even insert/delete/post operations take place without locking out readers. Write operations always run in the context of a transaction, explicit or implicit; all modifications are applied to transaction-specific copies of the blocks, while read operations continue against the last committed version without any locks.

Only during the commit of a transaction, and only for the time it takes to replace some pointers inside the buffer manager, are read operations locked. The actual disk I/O of the commit process takes place after read operations are already allowed again. This locking takes place on a per-table basis.
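
To make the explicit case concrete, here is a minimal sketch of a write batch inside an explicit transaction, assuming TnxDatabase exposes BDE-style StartTransaction/Commit/Rollback and using a hypothetical Price field:

  procedure RaisePrices(Database: TnxDatabase; Table: TnxTable);
  begin
    Database.StartTransaction;
    try
      Table.First;
      while not Table.Eof do
      begin
        Table.Edit;
        Table.FieldByName('Price').AsFloat :=
          Table.FieldByName('Price').AsFloat * 1.1; // hypothetical field
        Table.Post; // writes go to transaction-specific block copies
        Table.Next;
      end;
      // Readers on other sessions are blocked only while the buffer manager
      // swaps pointers during this commit, not during the disk I/O.
      Database.Commit;
    except
      Database.Rollback;
      raise;
    end;
  end;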
