SOLVED - Cannot delete dupes from log

a65dr
SOLVED - Cannot delete dupes from log

Good day all.
After mistakenly importing my main backup adi file (instead of exporting it) I have 37k dupes in my log.
I tried the delete-dupes function in the QSO window File menu but get this error:

----------------------------
ERROR: : Error executing query: Cannot truncate a table referenced in a foreign key constraint (`cqrlog001`.`log_changes`, CONSTRAINT `log_changes_ibfk_1` FOREIGN KEY (`id_cqrlog_main`) REFERENCES `cqrlog001`.`cqrlog_main` (`id_cqrlog_main`))
----------------------------

After manually deleting many dupes from the top of the log, this error window takes a little longer to appear, so the error *might* be thrown when the routine reaches the first remaining dupe.

I saw this was reported back in 2012 and 2014, and I have tried the workaround of mark all qso as uploaded to all logs, but this doesn't fix the issue.

cqrlog version is '2.0.5-3ubuntu1'

Is there anything else I can try? Deleting the dupes manually would take many months.

73
PAUL
A65DR

oh1kh
Cannot delete dupes from log

Hi Paul!

I would fix this from the command terminal with mysql. I know it is a "nerd" job, but there is no way around it in this case...

First of all, I hope you made a complete backup right after you noticed the error. That will let you play with different rescue approaches and, if they fail, start again from the moment of the disaster.

So first, a copy of everything using the command terminal:

cp -a ~/.config/cqrlog ~/.config/cqrlog_safe

It is also wise to copy ~/.config/cqrlog_safe to a USB stick or an external HD.
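For example, with a USB stick mounted somewhere under /media (the mount point below is only an example, adjust it to match your system):

cp -a ~/.config/cqrlog_safe /media/$USER/usbstick/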

You can also use a file manager GUI if you set "show hidden files and folders" to ON so that you can see the ".config" folder.

After doing that, open cqrlog so that the database engine starts.
Then from the command terminal give:
mysql --socket=/home/saku/.config/cqrlog/database/sock cqrlog001

Replace "saku" with your username and "001" with the number of the faulty log.
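If you instead run a separate MySQL/MariaDB server (listening on port 3306) rather than the one cqrlog starts itself, the socket path above will not exist. In that case something along these lines should work; the username is only a placeholder, and -p makes the client prompt for your password:

mysql --host=127.0.0.1 --port=3306 -u youruser -p cqrlog001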

You should get mysql (or MariaDB) prompt:

MariaDB [(none)]>

Now, if you know the call and date of the last qso logged before this faulty import, give this command (I use my last logged qso here):

MariaDB [cqrlog004]> select id_cqrlog_main,callsign,qsodate from cqrlog_main where callsign="DL4PU" and qsodate="2020-04-15";
+----------------+----------+------------+
| id_cqrlog_main | callsign | qsodate    |
+----------------+----------+------------+
|              4 | DL4PU    | 2020-04-15 |
|              7 | DL4PU    | 2020-04-15 |
+----------------+----------+------------+
2 rows in set (0.001 sec)

MariaDB [cqrlog004]>

You should now see 2 entries if all your qsos are duplicated.

Look at the smaller "id_cqrlog_main". If that row is the last valid qso of your log, all qsos from the faulty import will have a bigger "id_cqrlog_main".
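If you do not remember the last valid qso, a self-join like this (my own sketch, not a cqrlog feature) should return the smallest id that duplicates an earlier entry, which is roughly where the imported block starts:

MariaDB [cqrlog004]> select min(b.id_cqrlog_main) from cqrlog_main a join cqrlog_main b on a.callsign=b.callsign and a.qsodate=b.qsodate and a.id_cqrlog_main < b.id_cqrlog_main;

A genuine repeat contact with the same station on the same date would also match, so verify the result with a select before deleting anything.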

First we check that those qsos are really there:

MariaDB [cqrlog004]> select id_cqrlog_main,callsign,qsodate from cqrlog_main where id_cqrlog_main > 4 ;
+----------------+----------+------------+
| id_cqrlog_main | callsign | qsodate    |
+----------------+----------+------------+
|              5 | OH1KH    | 2019-09-30 |
|              6 | G0PNM    | 2020-04-15 |
|              7 | DL4PU    | 2020-04-15 |

............ a long list ............ finally ..........

+----------------+----------+------------+
3 rows in set (0.001 sec)

MariaDB [cqrlog004]>

So the row count shows how many qsos there are with a bigger "id_cqrlog_main".

Then we delete them.
REMEMBER NOW THAT THE BACKUP MUST BE IN ORDER BEFORE DOING THIS!

MariaDB [cqrlog004]> delete from cqrlog_main where id_cqrlog_main>4;
Query OK, 3 rows affected (0.001 sec)

MariaDB [cqrlog004]>
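If the delete stops with the same foreign key error you saw in the first post, it may be necessary to remove the referencing rows from log_changes first. That is only a guess based on the constraint named in your error message, so take it as a sketch:

MariaDB [cqrlog004]> delete from log_changes where id_cqrlog_main > 4;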

The affected row count should be the same as the row count from the select above.
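As a quick sanity check before going back to cqrlog, you can also compare the total row count against your qso count from before the faulty import:

MariaDB [cqrlog004]> select count(*) from cqrlog_main;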

Then you can go to the cqrlog qso list and refresh the log view. You should now see your log the way it was before the accident. The qso and DXCC counts should be as they were.

--
Saku
OH1KH

ei2idb (not verified)
Cannot delete dupes from log

Hi Paul,
As you said, you have an adi file backup. You can import it into a new database, or clear the corrupted database and then import from the adi file. Either way you will probably need to re-upload all your qsos to online logs like Club Log and so on, if you use them. Do that not from cqrlog but by uploading the adi file through the websites; after that, mark all qsos as uploaded to all logs in the qso list window.
BTW... backup, backup and once again backup....

a65dr
Wow, thank you for the super

Wow, thank you for the super fast replies and helpful input.
Hi Saku,
(NB I use MySQL for the DB, not the in-built MariaDB)
This is just great. I'm at work at the moment so don't have time to try the details just now, but if I understand this (in principle), each QSO entry also gets a (hidden) cqrlog serial number (id_cqrlog_main) and I just need to find where the dupes start and then delete everything newer (higher numbered) with the SQL command you mention.
I have just one import from WSJT since the dupes were created, so I can rebuild the missing QSOs after the delete quite easily.
For backup see also my reply to Slav (below).
I'll comment again when done and let you know how I get on.
Thank you for the suggestion.

Hi Slav,
I have been using the adi export of the whole log as a disaster-recovery backup. I've been doing that for 6 years and so far I have not really needed it, but we know a lack of backups always catches you out sooner or later.
I have been backing up quite regularly to adi.
My problem with restoring from the adi is that my adi (while adequate) does not have all the QSO data, because I export a limited field list. The limited list is because every few days I also export an HTML-based log for my QSL manager, and I prefer a table with just the data he requires; I don't want to change the export field list in settings each time I export.
I do copy the entire log adi to a separate machine as well, so I believe the minimum data is safe enough.
The task here is preserving the extra data (like qso comments and off times) from the original log.
But this episode does illustrate that I need a better (complete) backup scheme that is fairly automated. If I had that, going back to known good data would have been easier and would have avoided the need for this recovery effort. Since I am using MySQL I would have to copy (probably rsync) /var/lib/mysql/ out to a safe place. Or maybe I can export a complete backup using SQL? Which do you think would be best (and easiest to script/schedule) for a non-SQL-skilled person?

73,

Paul H
A65DR, G7SLP etc

oh1kh
Cannot delete dupes from log

Hi Paul!

Do I understand correctly that you have a separate SQL server ("NB I use MySQL for the DB, not the in-built MariaDB")?
If you have MySQL or MariaDB but use "save log data to local machine", then cqrlog starts its own mysqld_safe process and everything goes as stated above, no matter whether the console prompt says "mysql" or "MariaDB".

But if you have a separate DB server (at port 3306), on the same PC or elsewhere, then mysqldump is probably the way to back up logs.
Note that there are start.sh and stop.sh scripts (placed in the ~/.config/cqrlog directory) that are run at cqrlog start and stop.
By making a stop.sh that fires mysqldump you can take a backup every time cqrlog closes.

Putting this line into stop.sh will do the dump (it is part of my server's backup script):

$(/usr/bin/mysql -B -N -e 'show databases like "cqr%"' | /usr/bin/xargs /bin/echo -n mysqldump --databases) > /tmp/mycqrlogsdump.sql

You can test it from the command line, too.
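The inner mysql call lists every database whose name starts with "cqr", and xargs turns that list into a single mysqldump command, so all cqrlog logs end up in one dump file. Restoring should be as simple as feeding the dump back in, though I have not shown that here and you may need host/user/password options if your server requires them:

mysql < /tmp/mycqrlogsdump.sql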

ADIF backups are not complete, so if you want a full backup something else has to be done.

Things are easy if "save log data to local machine" is checked:

Just copy the whole ~/.config/cqrlog directory, with all files and folders, to another name and/or device.
Restoring is just as easy: first delete the existing directory, then copy the saved folder structure back.
cp -a
does a full copy from the command console.
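A restore from the safety copy made earlier would then look roughly like this (make sure cqrlog is closed first; the _safe name is the one used above):

rm -rf ~/.config/cqrlog
cp -a ~/.config/cqrlog_safe ~/.config/cqrlog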

--
Saku
OH1KH

ei2idb (not verified)
Wow, thank you for the super

Hi Paul,
If I can be helpful (obviously not as much as Saku, hi) that's fine.
Now I see that you need a more serious solution than mine.
I am not too good with mysql or MariaDB, so what I am doing is this:
First, I use the auto backup from cqrlog Preferences -> Exit & auto backup. That file is synchronized to the cloud after each closing of the logger.
For the hard disaster case I use Duplicity, the backup tool built into Ubuntu, which makes safe automatic backups. It backs up my full /home dir and other directories daily. Only the first backup is a full backup; the following ones are incremental, so it does not take too much time. I keep this backup on a USB stick and on another well-known cloud.
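From the terminal the same idea looks roughly like this; the paths are made up for the example, and on Ubuntu the Backups GUI normally drives duplicity for you:

duplicity /home/slav file:///media/usb/backup
duplicity restore file:///media/usb/backup /home/slav

The first run is a full backup and later runs are incremental by default.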

a65dr
Solved - deleting dupes from log

Hi Saku
The SQL direct delete worked just fine. I am happy :-)
The SQL server is on localhost:3306 (the usual, I guess). I had to add --password to the mysql console start and enter my SQL password, but no big issue.
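In case it helps anyone else, the working invocation was roughly along these lines (your MySQL username will differ, and --password with no value makes the client prompt for it):

mysql -u youruser --password cqrlog001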
The backup command didn't work, perhaps because of the password requirement, but to avoid topic drift I'll start a new topic for backups; I think it is a discussion worth having, and a quick search of the forum doesn't turn up quite that topic in SQL terms.
I really appreciate you spending time to help me with this. Thank you.

Hi Slav,
Thank you for your input also. It gave me the prompt to set up a proper backup now the dupes issue is fixed. I'm grateful for all helpful replies.

73,

Paul H
A65DR, G7SLP etc

oh1kh
Solved - deleting dupes from log

Hi Paul!

Nice to hear all worked out ok.

--
Saku
OH1KH

ok2cqr
Re: SOLVED - Cannot delete dupes from log

Hi, there is no need to go to the SQL console. Just go to the Log upload menu and choose "Mark all QSO as uploaded". It should remove all records from the log_changes table, and the delete-dupes function in the menu should then work.

73 Petr, OK2CQR

a65dr
Hi Petr,

Hi Petr,

Thanks for your reply. I saw this in earlier posts and tried it before posting; that's why I mentioned in my initial post:

"I saw this was reported back in 2012 and 2014, and I have tried the workaround of mark all qso as uploaded to all logs, but this doesn't fix the issue."

I tried it a couple of times but it didn't work for me.

Happy I was able to identify and delete the unwanted DB records.

73,

Paul H
A65DR, G7SLP etc