Improve Database Application Performance in a Network
Categories: Programming Examples, SQL

Databases have been around in client/server environments for quite a while. Modern, 3-tier architectures are examples of environments where database logic has been separated from the database server using network communication.

However, you can get some unpleasant surprises and poor performance when you deploy a database application in a large network.

In this article, one of our Mimer SQL developers, Per Schröder, discusses what role your choice of programming techniques plays in the level of performance you can expect from your database application.

Over to Per:

About The Test Application

In order to illustrate how different programming techniques can influence application performance, I wrote a simple test application (available for review and download here).

The purpose of the application is to read the contents of an order database and compare the result with the contents of a corresponding stock-level database. The application then sets the status of each order depending on whether or not the order can be filled. In total, the application reads more than 60,000 database entries and updates almost as many entries.

Solution A

My first solution was a test application written using Open Database Connectivity (ODBC).

I wrote the application in the simplest way possible. When I ran the application on my local computer, it took 178 seconds to complete. When I ran it over our network, it took 1270 seconds to complete.
The main problem with this implementation is that the program fetches and reads all of the entries row by row. Of course, this means an enormous amount of network communication.

Unfortunately, this is the way many ODBC applications are written. One common reason is that the developer uses a tool that doesn't give them control over which ODBC calls are generated.
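The row-by-row pattern of solution A looks roughly like this. This is a minimal sketch only: the table and column names are placeholders (not those of my actual test application), error checking is omitted, and it needs a live ODBC connection to run.

```c
/* Row-by-row pattern of solution A: one SQLFetch, and hence at least one
   network round trip, per row - and the statement text is compiled by the
   server every time SQLExecDirect is called. */
#include <sql.h>
#include <sqlext.h>

void read_orders_row_by_row(SQLHDBC dbc)
{
    SQLHSTMT stmt;
    SQLINTEGER order_id, qty;

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* Compiled and optimized by the server on every execution. */
    SQLExecDirect(stmt,
        (SQLCHAR *)"SELECT order_id, qty FROM orders", SQL_NTS);

    SQLBindCol(stmt, 1, SQL_C_SLONG, &order_id, 0, NULL);
    SQLBindCol(stmt, 2, SQL_C_SLONG, &qty, 0, NULL);

    /* Each SQLFetch retrieves a single row from the server. */
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        /* ... compare against the stock level, update the order
               status, one row and one round trip at a time ... */
    }
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
}
```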

Solution B

I re-wrote my test application so that it used bulk fetch and bulk update. This meant that it could transport a larger number of entries for each network call it made.

Also, in solution A, my application compiled each SQL statement (SQLExecDirect). In solution B, I wrote it so that it used pre-compiled SQL statements (SQLPrepare and SQLExecute). The result was that, using the same test material, the application took 21 seconds to run locally and 40 seconds to run over the network – 30 times faster than solution A!
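In ODBC 3.x, bulk fetch can be requested through row-array binding. Here is a sketch of what solution B does; as before, the table and column names are placeholders, error checking is omitted, and the array size of 100 is an arbitrary illustrative value.

```c
/* Bulk-fetch pattern of solution B: the statement is compiled once with
   SQLPrepare, and each SQLFetch brings back an array of rows instead of
   a single row. */
#include <sql.h>
#include <sqlext.h>

#define ROWS_PER_CALL 100

void read_orders_bulk(SQLHDBC dbc)
{
    SQLHSTMT stmt;
    SQLINTEGER order_id[ROWS_PER_CALL], qty[ROWS_PER_CALL];
    SQLULEN rows_fetched;

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* Compile once; SQLExecute re-uses the compiled statement. */
    SQLPrepare(stmt,
        (SQLCHAR *)"SELECT order_id, qty FROM orders", SQL_NTS);

    /* Ask the driver to fetch an array of rows per call. */
    SQLSetStmtAttr(stmt, SQL_ATTR_ROW_ARRAY_SIZE,
                   (SQLPOINTER)ROWS_PER_CALL, 0);
    SQLSetStmtAttr(stmt, SQL_ATTR_ROWS_FETCHED_PTR, &rows_fetched, 0);

    SQLBindCol(stmt, 1, SQL_C_SLONG, order_id, 0, NULL);
    SQLBindCol(stmt, 2, SQL_C_SLONG, qty, 0, NULL);

    SQLExecute(stmt);

    /* Each SQLFetch now returns up to ROWS_PER_CALL rows. */
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        for (SQLULEN i = 0; i < rows_fetched; i++) {
            /* ... process order_id[i] and qty[i] ... */
        }
    }
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
}
```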

Solution A vs. Solution B

Why such a dramatic difference in performance? One simple reason is the number of network calls an application needs to make. Solution A made 218865 network calls. Solution B, using bulk operations, only had to make 4133. In my network, it takes 5 ms to make a call from a client to a server and receive a response, so solution A needs 1094 seconds (218865 x 0.005) for network traffic alone. You can see how fast your network is by pinging your server from the client.

Another reason is that I avoided compiling every SQL statement using SQLExecDirect(). Admittedly, it’s a lot easier to write an application using SQLExecDirect(), but you can dramatically decrease your database server load by using prepared SQL statements.

Some people say that ODBC is an interface that gives poor performance. In my opinion, it all depends on how you use it. Use it right and you can get excellent performance.

But Now for Solution C

If you place all the logic in the database server in the form of stored procedures (PSMs), there is no need to transport such large amounts of data over the network. I implemented the necessary PSMs on my database server and got the data I wanted in only 6 seconds, both locally and via the network.
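As an illustration only – the table and column names below are invented, and the real procedures are in the downloadable test application – such a procedure can be written in standard SQL/PSM, which Mimer SQL implements:

```sql
-- All rows are examined and updated inside the server; the client's only
-- network traffic is the single CALL statement.
CREATE PROCEDURE set_order_status()
MODIFIES SQL DATA
BEGIN
   UPDATE orders
   SET status = CASE
                   WHEN qty <= (SELECT stock_level
                                FROM stock
                                WHERE stock.item_id = orders.item_id)
                   THEN 'FILLABLE'
                   ELSE 'BACKORDER'
                END;
END
```

The client then issues a single CALL set_order_status() and the tens of thousands of rows are examined and updated without ever leaving the server.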

So, PSMs are the preferable solution if you have some database logic that needs to examine or update large amounts of data.

The Bottom Line

A graph of the three run times shows what a difference the choice of SQL programming method can make: 1270 seconds for solution A over the network, 40 seconds for solution B, and 6 seconds for solution C. In fact, the result for solution C is so good, you can hardly see its bar in the graph!

Some Tips
  • Place the database logic as close to the database server as possible – preferably inside the server itself, using PSMs.
  • Use bulk operations whenever you can. They can decrease the number of network calls your application has to make.
  • Don’t fetch more data to your client than it needs, for example, data to be displayed in a window. Let SQL or PSMs process the data.
  • Find out what your ping times are in your network and work out the number of network calls your application makes. This gives you an idea of the amount of time that is spent on network communication. You can reduce this time by upgrading your network, or by reducing the number of network calls your application needs to make.
  • Use prepared statements and parameter markers that you can recycle. This minimizes the number of compilations and SQL optimizations that the database server has to perform.
  • Avoid creating each SQL statement dynamically. Many applications build each SQL statement by concatenating a string of SQL text with the current values of some variables. While this type of programming may be easy, it forces the server to perform countless SQL compilations. A better way is to create a few prepared statements with parameter markers.
  • If you’re in a 3-tier architecture, find out how the middleware component communicates with the server. Then you can apply the tips above to the middleware tier.
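To make the prepared-statement tips concrete, here is a sketch of one UPDATE prepared once and re-executed with new parameter values. The names are placeholders and error checking is omitted; it is not runnable without an ODBC connection.

```c
/* Instead of building a new string such as
   "UPDATE orders SET status = 'FILLABLE' WHERE order_id = 4711"
   for every row, prepare one statement with parameter markers and
   re-execute it - the server compiles and optimizes it only once. */
#include <sql.h>
#include <sqlext.h>
#include <string.h>

void update_statuses(SQLHDBC dbc, const long *ids,
                     const char **statuses, int n)
{
    SQLHSTMT stmt;
    SQLINTEGER id;
    SQLCHAR status[11];

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    SQLPrepare(stmt,
        (SQLCHAR *)"UPDATE orders SET status = ? WHERE order_id = ?",
        SQL_NTS);

    /* Bind the markers once to local variables... */
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     10, 0, status, sizeof status, NULL);
    SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, &id, 0, NULL);

    /* ...then just change the values and re-execute. */
    for (int i = 0; i < n; i++) {
        id = ids[i];
        strncpy((char *)status, statuses[i], sizeof status - 1);
        status[sizeof status - 1] = '\0';
        SQLExecute(stmt);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
}
```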


  • You can review my test application here
  • Read more about PSMs and prepared statements in the Programmer's Manual found in the Mimer SQL Documentation Set, html navigation.



Mimer SQL Documentation Set, PDF-file. (See the Programmer's Manual.)

Last updated: 2002-08-27


Powered by Mimer SQL