A Modest Proposal

A Modest Proposal

It's been 40 years since I last coded anything in Pascal. I was wondering whether there are wrappers or bindings that enable calling C library functions from the Pascal that CQR is written in. I assume there are. I really like CQR and I'm considering creating a version of my own with the code refactored to use SQLite instead of MySQL.

SQLite eliminates dependence on unreliable external packages like MySQL. It's small, fast, and linked directly into the program. I switched to using it for all my database needs years ago.

Among the things SQLite has going for it: it's the most widely deployed and extensively tested database in the world. There are billions of instances running right now, on every Android device in the world, and much of what Android stores goes into SQLite databases. US government agencies have been switching over to it to eliminate dependence on outside vendors and problems exactly like the ones we're having with CQR. The US Navy has switched onboard ship systems to SQLite: maintenance systems, damage control, fire suppression, weapons and fire control, and navigation on many US ships now use it. What's more, the SQLite file format (along with CSV and XML) is one of the few recommended by the US Library of Congress for long-term archival storage of data, precisely because the format is stable.

The one thing SQLite does not do well is concurrent writes: there can be only one writer at a time. CQR obviously has no need for that. Most applications don't.

What do you think?


A Modest Proposal

Hi!
Then it is a good time to recall your Pascal :-)
I used it at school on a "big mainframe computer" that was weaker than the simplest smartphone today and had only teletypewriter terminals and a paper tape punch to save and load programs (those that exceeded the storage quota).
After that I used Borland Turbo Pascal 5.5 a bit, but then came a long gap. I tried the first versions of Delphi, and Kylix when I moved to Linux, but that did not feel good.

Free Pascal / Lazarus finally brought me back to programming, and cqrlog has been a good training environment.

Concurrency is outside my SQL knowledge. If you mean several users on the same database at the same time: cqrlog does use parallel threads that do their own tasks with the database.
Running several cqrlog instances with their own local settings, but a common log database, is also possible.

There is also cqrweblog, which uses the same database via a web server. You can do your portable logging via smartphone, even at the same time as the ham shack computer is using standard cqrlog (with the same external MySQL server): https://github.com/dl8bh/cqrweblog

Actually the problem with cqrlog is that it runs a mysqld_safe process that holds its data in ~/.config/cqrlog/database for easy backups. The security settings of some Linux distributions prevent this, others do not.
If an external MySQL server is used instead, there are no problems, but backups are more difficult and an SQL user must also be created on the external server.

In addition, the Lazarus SQL unit is a bit out of date and no longer works properly with MySQL 8.0, but does with MariaDB (MySQL 5.7 is needed).

But nothing prevents you from doing things your own way. The Pascal source is around 2.5 million characters, with nearly the same amount again in graphical definitions (lfm files).


A Modest Proposal

Hi Phil, Saku,

first of all, thanks to Petr, Saku and the other contributors for the great job that is CQRLog.

I have been using the log for several years and, like others, I see that it is harder and harder to keep CQRLog running after upgrading the operating system. Each distribution is specific, and it is difficult to prepare one package for all.

Based on my experience, SQLite is good for single-writer/multiple-reader applications. Concurrent access with multiple writers is possible but, to be honest, the support is not as good as in MySQL or PostgreSQL. You can use some kind of SQL serialization with SQLite, but the application has to be designed for it. Therefore I think CQRLog with SQLite is a totally new project - maybe a CQRLog V2, written in a scripting language for rapid development with a wide base of contributors.

Excuse my curiosity, maybe it has already been answered, but why does CQRLog start the MySQL server from the application?
Is it not easier to have a well-tuned external database that the application connects to, rather than an application that starts the database server itself? And why does the application use the HOME folder to store database files instead of the default path?

I wish you Merry Christmas, Happy New Year and many DXs.


Hello Lada,

Hello Lada,

I haven't studied the code, but I doubt that CQRlog requires any concurrency from the database. This is not a Walmart store that might have 50 or more asynchronous transactions happening simultaneously. CQRlog doesn't need the immense power of a database like MySQL. It needs something fast, rock-solid, and portable, like SQLite or BerkeleyDB. CQRlog uses only a tiny fraction of the power of MySQL, yet must carry the burden of a complex package that is normally maintained by professional IT personnel. That's not to belittle SQLite, which can handle databases over 200 terabytes in size.


Re: A Modest Proposal

Hi Lada,

the first versions of CQRLOG used Firebird as the database server; user data was stored in $HOME for easier backup. Also, after a system reinstall it was easier not to forget to back up the database when it was in your home directory. Installing the application was much easier, too.

With version 1.0, CQRLOG moved to MySQL. As far as I remember, the main reason was creating DEB packages: there were no DEB packages for Firebird in the Debian repository. After a lot of testing of SQLite I decided to use MySQL. I think there is still an SQLite-related branch in the git repository from when I tried it.

I think MySQL was a good choice, but I made one big mistake - saving the database in the user's home directory. I should have used the installed MySQL server and developed a user-friendly backup for CQRLOG data. Right now I don't know how to fix the mistake and help users move to the system's MySQL server. Any ideas are very much appreciated.

Personally, I use CQRLOG on several computers with one central database running on our home server, which also provides other services like DLNA, backup, OwnCloud etc. Using SQLite instead of MySQL wouldn't allow this. CQRLOG could probably use embedded MySQL, but I have no idea how to get that to work across numerous Linux distributions.

Right now CQRLOG has these main issues:

- MySQL data in the user's HOME directory
- GTK2 is getting obsolete and will be removed from distributions; we have a QT5 port, but it's still unstable because of bugs in Lazarus
- DEB and RPM packaging. It's a real pain to release a new version; we need snap and flatpak packages, and any help is really welcome

Another problem is me. I work like a dog and barely have the energy to get on the air. The CQRLOG source code is not in good condition and needs a lot of refactoring; starting again doesn't make sense, because creating an app as big as CQRLOG would take years. Step-by-step refactoring is better, and I have already started, but only with small changes. Solving the main issues would help a lot, because then I could release a new version monthly and it would be more fun. Releasing a new version takes me several hours and that is horrible :(.

Merry Christmas and Happy New Year!

73 Petr, OK2CQR

Hello Petr!

Hello Petr!

Thank you for the detailed reply. I figured there was some history behind everything. There always is.

And, I don't mean to give the impression that I'm throwing rocks. It's easy for me to sit here and point at problems. Now that I have an upgraded machine and the right software installed properly, version 2.4.0 is working great. It was the difficulty of getting to this point that's the problem. And, the problem is not with CQRlog itself but external stuff that's all too easily misconfigured on a given system. The fact that I'm here discussing it shows that I like CQRlog and care about its future. :-)

I have no experience with Snap or Flatpak and until now have avoided both, and any software that requires them. I imagine I can't keep that up forever. Hah.


A Modest Proposal

Thanks, Petr, for your answer. What you wrote above makes sense. I do not think it was a BIG mistake to place MySQL in the user's home directory. It was one way to easily keep everything together.

I have implemented a few projects where MySQL was placed in a chroot (non-standard folder). But I can confirm that it is not the best choice when you want to upgrade Linux - it's a nightmare, because everything has to be reconfigured manually.

How to migrate to an external MySQL? Maybe I have a naive view, but there are several ways to do it:
1) create a shell script that converts the CQRLog local DB to an external MySQL DB. You just enter the connection parameters and schema name of the new external database, and the database objects are migrated via SQL export.

2) it can also be done directly via a new CQRLog GUI menu item - something like File->Export->Export SQLs, or Migrate to External DB.

The mentioned migration option(s) would be available in the next release (e.g. 2.5), with a note that a following release (e.g. 3.0) will use only an external database. As I wrote, it is a naive approach, but sometimes a naive approach can help in the future.

I am not a snap or flatpak specialist, but if you use an external database then snap or flatpak does not have to be used. You can simply use a DEB package as today - BTW, snap is Canonical's packaging system, which is widely used only in Ubuntu.

GTK2 is a more serious problem but this is a longer discussion.

It is a long, long time since I wrote an application in Pascal (especially Turbo Pascal) - I have no experience with Lazarus. Therefore I cannot help you with CQRLog itself, but I can help you with a migration script if you find it useful. Just like you, I do not have much free time, but the migration script is possible to implement.

@Phil: From a brief look at the source code, I can see that CQRLog has many threads and every thread can potentially use the database. I cannot confirm how many threads actually do, but CQRLog is designed as a multithreaded application. Therefore, it is not easy to switch to SQLite without rewriting or redesigning CQRLog.

I also found an interesting project - CloudLog by 2M0SQL, where the log is designed as a (web) service. This approach is enterprise-orientated, but it has several benefits, like separating the database from the data view (the view can be designed for many different devices). It also solves many problems with upgrades. On the other hand, it does not have as many features as CQRLog.

As Phil wrote, the fact that we are discussing it shows that we like CQRlog and care about its future.


Ah, well, if it's

Ah, well, if it's multithreaded and more than one database operation can be in progress at the same time, then SQLite is not going to work as-is. One could put an interface layer over SQLite that serializes access, but that defeats the point of multithreading. Hahaha.

Completely damaged mysql database

Ahoj Petre,
first of all, thank you for CQRlog. I like it, I like it very much.

OK, switch to English, it's easier for me and keeps the others inside. ;-)

Being the maintainer of a number of open-source projects myself, I really understand your comments about the burden it puts on whoever maintains them. (I became a lousy maintainer myself.)

Anyway, today I lost a complete MySQL database of QSO records, and that makes me completely distrust MySQL for anything here. (That's why I'm posting in this thread.)

I don't know how it actually happened, but the effect now is: I can restore all database files from a backup. (CQRlog and the related mysql process were not running during the backup.) I remove ib_logfile*, and then mysqld starts up, with some complaints, but it starts. As soon as the most recent database is accessed, mysqld crashes with an abort(), and it then leaves the remaining files in a state where the database server doesn't even start again. The only way to restart it is to remove all files, restore them from backup, remove the transaction logs, and start it. All other CQRlog databases could be dumped and read, but the most recent one always fails. All recommended recovery steps for MySQL require a running server, and eventually crash the server in the same disastrous way.

The crash can easily be reproduced using the mysql CLI too, just by saying "use cqrlog006;".

To me, this means I completely distrust MySQL as a serious solution for storing the logs. So in order to continue using CQRlog, I really long for a different database engine. If SQLite is unusable for some reason, I'd even set up a PostgreSQL instead ...

(I understand that SQLite is unusable in any shared environment. But then, setting up an external database is necessary anyway.)

Regarding the use of SQLite in a multi-threaded environment, this appears to be possible: SQLite can be built in "serialized" threading mode, in which a single connection may be shared between threads.
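For instance, in Python's built-in sqlite3 module (just an illustration, not CQRlog code; the file path is made up), a database can be switched into WAL journal mode, where readers are not blocked by the single writer:

```python
import os
import sqlite3
import tempfile

# WAL requires an on-disk database, so create a throwaway file.
path = os.path.join(tempfile.mkdtemp(), "log.db")
con = sqlite3.connect(path)
# In WAL journal mode readers proceed while one writer writes, which is
# often enough concurrency for a logging application.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
con.close()
```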


73, Jörg

Here's how I understand the

Here's how I understand the problem. Yes, making SQLite thread-safe is not difficult within a single app, and if this approach had been chosen from the start, CQRLog could certainly have been made to work with it. Given the nature of logging programs, a custom database server based on SQLite could also have been created to support multiple networked machines running CQRLog, as in a contesting setting. That problem can be solved simply by buffering transactions and results and routing them to the right machines.

HOWEVER, CQRLog was not designed this way; it was designed to use a server engine like MySQL, which provides functionality a step well beyond what you and I could easily code up. MySQL supports concurrency, and since this capability was assumed to be available from the start, CQRLog makes use of it in the database server. That's the problem.

My understanding is that changing CQRLog to work without concurrency would be a major rewrite with all that that entails.

Re: Completely damaged mysql database


I'm sorry, but something really terrible must have happened. MySQL is a robust database that doesn't lose data on a daily basis.

When you use cqrlog with data in the home directory, it uses MySQL/MariaDB more like an embedded database, so some features are disabled. That was my big design mistake. I should have avoided this from the beginning and used only the full server version. That would fix a lot of problems by design.

If something happens and MySQL crashes, it's a really bad idea to delete any files. MySQL can be started in several crash recovery modes and you'll be able to extract the data. Unfortunately, right now I don't remember the parameter that has to be added to the config file, but when you do that, you'll be able to run the MySQL server even on a crashed database and extract the tables using a dump.
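For reference, the config parameter in question is presumably InnoDB's `innodb_force_recovery` (Jörg mentions the same switch elsewhere in this thread). A sketch of how it is typically set; the file path is an assumption and varies per distribution:

```ini
# e.g. /etc/mysql/my.cnf (path is an assumption; use the server's config file)
[mysqld]
# Valid values are 1-6; start at 1 and raise only as needed.
# Higher levels risk further damage, so use them only to get the
# server up long enough to mysqldump the data out.
innodb_force_recovery = 1
```

With the server up in this mode, `mysqldump` can usually extract the tables; remove the setting again afterwards.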

PostgreSQL would be nice, but MySQL/MariaDB is installed on almost every hosting server. It's much easier to use as an external database.

Using SQLite could also be fine, but I wanted a database with remote access. I have the CQRLOG database on a home server and access it from different computers and locations through a VPN. It works very well even on a slow Internet connection.

My really bad decision was the data storage location. I should have used the system MySQL server and not run my own instance with data in the home directory. This will be really hard to fix.

73 Petr, OK2CQR

Re: Completely damaged mysql database

Well, after that event, I'm really no longer convinced that MySQL is a robust database.

What bothers me most is this "self-destruction" feature: you can initially get it to start, but as soon as you do anything with it, it puts itself into a state where the server does not even start anymore - even if the access was completely read-only. A simple "use cqrlog006;" suffices to damage it completely. I find such behaviour unacceptable. --innodb-force-recovery=999999 helps a bit, but the database in question still cannot be recovered.

So, I've got a pile of files now containing information I can no longer access … I can always restore the current situation from my backup (so there's really nothing lost by twiddling around), but there are some databases within that pile that are simply unwilling to release any of their information.

Regarding always running against a remote database: well, there are situations (like on a lonely hill somewhere) where there's simply no Internet access (other than expensive satellite ;-). Considering that, I always thought that running the database locally is my preferred option. (At home, I've got a well-serviced PostgreSQL server around, including daily backups.) That's why I think the idea of having local SQLite storage has some merit. If Internet is available, it could easily be replicated from there to a remote backup.

Re: Completely damaged mysql database

Hi All!

Maybe because I have used an external database server nearly from the beginning of my cqrlog usage, I have not had any fatal problems (yet...) with the logs.
My external database server is actually on the same computer. Later I added database replication to my home network server.

When you install the needed packages you already have a running "external" database server on your computer. So it is there, leaving just the database user definition and the database transfers to do to get it into use.

I do not see this as such a hard problem to fix. Just a script is needed to create the database user and do the transfers with mysqldump. I move the databases every night between my home server and my secondary server. And it is not replication, but a true bash-scripted mysqldump.

Lada, OK1MLG, has very good suggestions.
We need a scripting tool to do the transfer. It does not need to be inside cqrlog; just external bash scripting. It is needed only a few times.
It can be done now, without cqrlog modifications. Users can run it and test it by selecting the external database at start in "database connection". Then they can switch back to "save log data to local machine" if they feel like it (mainly for ~/.config/cqrlog-type backups).

The script must be able to create a database user with the needed privileges, and do the dump and move of the cqr% databases. The user should be able to execute it again and again (check for the existence of the database user before creating it; drop tables on the external server if they exist).

This is the first exercise, with an announcement that in the future (v3.0) it will be the only way to use cqrlog, so it should be tested.

Once cqrlog has proper GUI-based backup/restore using mysqldump, the user can stay on the external database server. This would be in some version below 3.0.
On quick thinking, cqrlog could easily start mysqldump from inside the program, but restoring databases needs more thought: cqrlog cannot be running while doing a database restore. Perhaps that is easiest to leave to a bash script.

Step 1) Transfer script for user testing. That cannot destroy anything.
Step 2) cqrlog release with database backup (mysqldump)
Step 3) Announcement that future versions support only an external database
Step 4) Release of "only external DB" cqrlog
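As a sketch of what step 1 could look like: a small helper that only composes the user-creation SQL and the dump-and-load commands, executing nothing. All names, the `cqrlog%` grant pattern, and the host are made-up assumptions, not CQRLOG's actual scheme (written in Python here, though the real tool would likely be bash):

```python
import shlex

def migration_commands(user, password, host, databases):
    """Compose (not run) the statements and shell commands that would
    move local cqrlog databases to an external MySQL/MariaDB server."""
    # Idempotent user setup: safe to execute again and again.
    user_sql = [
        f"CREATE USER IF NOT EXISTS '{user}'@'%' IDENTIFIED BY '{password}';",
        f"GRANT ALL PRIVILEGES ON `cqrlog%`.* TO '{user}'@'%';",
        "FLUSH PRIVILEGES;",
    ]
    # One dump-and-load pipeline per database.
    dump_cmds = [
        f"mysqldump --databases {shlex.quote(db)}"
        f" | mysql -h {shlex.quote(host)} -u {shlex.quote(user)} -p"
        for db in databases
    ]
    return user_sql, dump_cmds

user_sql, dump_cmds = migration_commands(
    "cqrlog", "secret", "192.168.1.10", ["cqrlog001", "cqrlog002"])
print(dump_cmds[0])
```

`CREATE USER IF NOT EXISTS` and the wildcard grant keep the script re-runnable, as suggested above.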

Any ideas, comments?


Database separation

Hello gentlemen,

looking back, I'm not sure it's a good idea to separate the database. Yes, for people who are familiar with Linux administration and know how to install and properly configure MySQL, this solution would be best. But I can imagine a large group of people for whom installing and configuring an external MySQL instance (not to mention hardening it) will be a nightmare. It won't be easy for them, because they will need to install another program, so they would rather choose another all-in-one application.

Unfortunately, I am no specialist in Lazarus, but Python, for example, has a library (SQLAlchemy) for database abstraction (from the developer's perspective, the application has the same code for MySQL, PostgreSQL or SQLite). Do you know whether there is anything similar for Lazarus?

I know the change of database library means non-trivial CQRLog changes, but the question is whether these changes might mean simpler code in the end, ready for the transition to the simpler and more suitable SQLite. Also with respect to the challenges awaiting CQRLog in the near future, such as the transition from GTK2, the transition from the old hamlib API to the new one, etc.


I fully agree on that. Having

I fully agree on that. Having the option to use either an external MySQL/MariaDB or PostgreSQL server, or an internal SQLite database, would be ideal. Defaulting to SQLite would certainly create the smallest startup hurdle for not-really-IT guys (and gals).

I actually looked into CQRlog's Lazarus code some time ago (and successfully compiled it myself), but I didn't look at how the database is currently abstracted in the code.

What I can imagine: I know PostgreSQL is quite strict in its interpretation of the SQL standard, whereas MySQL is often sloppy and tolerates non-conforming SQL statements. So it's possible that all the SQL code would need to be audited to verify its conformance. Alas, while I can probably recover quite some Pascal knowledge from past days, my SQL knowledge is really very limited.

Hello Jörg,

Hello Jörg,

Unfortunately, CQRLog has no database abstraction layer now. My question was whether there is something like SQLAlchemy for Lazarus that could be used in CQRLog. I googled for a while but couldn't find anything like that. But that could be because I am no specialist in Lazarus (or Pascal in general).

SQL abstraction

If there is no abstraction, wouldn't it still work if only "generic" SQL statements are used inside the code? I thought that was the idea behind SQL anyway. ;-) (Of course, opening/attaching to the database engine will need different code, obviously.)

Meanwhile, I think I found the reason for my issue: as part of an OS downgrade (actually, a recovery from backup), a slightly older version of MariaDB was installed. Using an up-to-date MariaDB (on my main FreeBSD machine, where this was easier to handle), I could dump all databases, and with a slight modification (replacing DEFINER=`` with DEFINER=CURRENT_USER) I could recreate the CQRlog databases on my ham radio laptop. At least some relief :-), but I'm still upset about the way MySQL / MariaDB behave here: completely rejecting a database created by a newer version would have been fine; "partially" accepting it and then destroying it is IMHO a no-go.

SQL abstraction

Just briefly: SQL is not simply SQL. There are many nuances between the SQL implementations in MySQL, PostgreSQL, Oracle or SQLite - how the database is connected, how SQL is executed and fetched. An abstraction layer means the developer does not hand-craft SQL statements or care which database is connected; the developer just sets filters and uses a pre-defined URL as the connection string. If you use an abstraction layer, you don't care which database is behind your application - SQLite or MySQL. And, importantly, you can change the database back-end without application changes. But as I wrote above, I don't know whether such a Lazarus lib exists.
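For what it's worth, the connection-URL idea can be sketched in a few lines of Python (a toy, not an existing library; only the SQLite branch is wired up, and the scheme names are my invention):

```python
import sqlite3

def connect(url):
    """Toy abstraction layer: the application calls connect(url) and the
    backend is picked from the URL scheme, so swapping databases means
    changing only the connection string."""
    scheme, sep, target = url.partition("://")
    if not sep:
        raise ValueError(f"not a connection URL: {url!r}")
    if scheme == "sqlite":
        return sqlite3.connect(target or ":memory:")
    if scheme in ("mysql", "postgresql"):
        # A real layer would hand off to the matching driver here.
        raise NotImplementedError(f"{scheme} backend not wired up in this sketch")
    raise ValueError(f"unknown scheme: {scheme}")

con = connect("sqlite://:memory:")
row = con.execute("SELECT 1 + 1").fetchone()
print(row)  # (2,)
```

A real layer (like SQLAlchemy) additionally papers over the dialect differences in the SQL itself, which is the hard part.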

Lazarus DB abstraction

A quick Internet search didn't get me much about a Lazarus DB abstraction, alas.

Maybe Petr knows it better, I guess he's more of a Lazarus expert here …

Hi !

Hi !

I am not a database expert at all, but when you do your googling, use "Freepascal" instead of Lazarus. That may produce better hits.


It is fine to have a conversation!
I did a bit of script testing yesterday, and moving log databases with mysqldump is very easy. The problem instead comes when the new user for the external database is created and granted privileges. If the PC user does not know the DB server's root password (often, I think, mixed up with the OS root password if the database server installation prompts for one), the whole process stops.

However, I think there are hundreds of cqrlogs running now without major problems, using the mysqld_safe process with the database in the user's home directory.
The questions are:

Is it worth forcing an external database server only?
It is a good option only for those who can use it.

Is it worth switching the database from MySQL to something else? What benefits does that offer?
MySQL is used in very large commercial products. It cannot be so bad that it is not good enough for ham radio logbooks.
I am thinking mainly of the amount of work that switching away from MySQL would cause in the cqrlog source code. Even though MySQL and SQLite seem to use the same SQLdb package in Free Pascal, I'm sure not every query works out of the box.

But as said, it is good to have a conversation about this. Maybe it leads to new, better ideas later.


Exactly. Who knows? We might

Exactly. Who knows? We might come up with an idea. :-)

I just wanted to mention that our criticism of MySQL is not that it's inadequate. It's over-qualified for the job. MySQL and Oracle are the tools you turn to for really huge installations: telephone company record keeping, or Walmart corporate headquarters logging every transaction in every Walmart store in real time. Installations like that have one or more full-time people who do nothing but keep the database servers maintained and running properly as they process tens of thousands of transactions per second. Those people know all the tricks, complexities, and bad behaviors of these powerful database engines.

With today's fast machines, ham radio logging could be done with a flat file. SQLite or BerkeleyDB already have far more power than any ham logging app needs. The one really handy thing about using an actual database instead of a flat file is that queries are easily put together in SQL.
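That convenience is easy to show. A tiny sketch in Python's built-in sqlite3 module (the schema and callsigns are invented for illustration):

```python
import sqlite3

# Toy log with a made-up schema, just to show the convenience of SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE qso (callsign TEXT, band TEXT, mode TEXT)")
con.executemany("INSERT INTO qso VALUES (?, ?, ?)", [
    ("OK2CQR", "20m", "CW"),
    ("OH1KH",  "20m", "SSB"),
    ("DL8BH",  "40m", "CW"),
])
# "How many QSOs per band?" is one line of SQL instead of a hand-written
# loop over a flat file.
per_band = con.execute(
    "SELECT band, COUNT(*) FROM qso GROUP BY band ORDER BY band").fetchall()
print(per_band)  # [('20m', 2), ('40m', 1)]
```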

MySQL etc.

Well, my criticism actually was indeed about MySQL - and its inability to refuse to work on a database created by a newer version, instead destroying it into unusability.

But that's only partially the point, and I fully agree: for most users, a full-fledged DB server is over-qualified for the job. That's also the major reason why SQLite is now ubiquitous: it offers a good-enough SQL implementation and performance without the hassles of setting up authentication schemes, handling backups, and so on.

Those few users who actually need a DB server (MySQL or PostgreSQL or such) probably already know how to set up a separate user and database inside the server. All they'd need in the application is the knobs to attach to the server (user name, password or maybe certificate, database name, server host name and port).

That's right. I wasn't

That's right. I wasn't thinking about that aspect, and it's a big, dangerous aspect. A stable and small file format is nice. I'm sure there's a good reason that an empty MySQL database takes some 220 megabytes on disk, but I don't know what it is.

I have to agree that

I have to agree that currently it is better to use SQLite from several points of view (easy installation, simple package management, simple administration, simple backup). But we must not forget that it has been many years since Petr (OK2CQR) began CQRLOG. As he wrote a few comments above, he also tried the SQLite variant and decided that MySQL was the better choice. Do not forget that SQLite was not then what it is today. Nowadays I would rather tend to rework CQRLog to an SQLite backend, because MySQL (or PostgreSQL) has grown into a heavyweight, grown-up database. I'm sorry for the expression, but from my perspective MySQL is currently too big a cannon for such a small project as CQRLog.

I think we could talk about this for a long time. Changing the database backend would mean easier administration and easier packaging, but it would cost a significant rework/redesign of CQRLog. Another question is whether Free Pascal has well-implemented SQLite support. Another thing is the obsolete GTK2 usage, etc. But it is good that we are talking about this here, because it is a way to improve or change something in a coordinated way.

Yeah, no one is pointing

Yeah, no one is pointing fingers. Anyone with engineering experience knows that as technology changes, the decision landscape changes. Looking back over the past 45 years I can see numerous things I would have done differently had I known the future.

Ten years ago, I too would probably have chosen MySQL. If I were designing a long-lived application like a ham logging program, the db would have to be either something I built myself or something I expected to be around long into the future. While SQLite had already been around for years by then, it had not yet caught on and become ubiquitous. I probably would not have chosen it, worried that in ten years it might be a dead project. Most projects end up as dead projects. MySQL was ubiquitous and lots of big systems were using it. Go with the proven tool.

Today's landscape is very different. SQLite has proven itself and has caught on like crazy, in many cases for exactly the reasons we are unhappy with MySQL today. Nobody would have bet money on that ten years ago.

A little snooping around turned up this: https://wiki.lazarus.freepascal.org/SQLite although I'm not exactly sure what I was looking for. Lol.
