Forums | Mahara Community
Support / cronjob again
19 October 2013, 3:06
Hello,
Since I've been using 1.7.2, the cron job has not been working the way it used to. It stops running from time to time. To fix this I have to delete the entry in the MySQL config table that indicates that the cron has not run to completion.
Mahara 1.7.2 (2013032209)
PHP 5.3
Memory Limit: 500MB
Time to Execute: 120
It's a hosted instance, and therefore the resources are limited. But that worked for over a year without problems.
And the number of students working with Mahara is limited to 50, so there shouldn't be a resource problem, although I worry that there might be one. Mahara seems to need considerable resources to work properly, doesn't it?
But maybe there is a workaround for the cron job problem, because I don't want to run a daily MySQL fix.
Does anyone have an idea? I would appreciate it very much.
Thanks and regards
Conrad
20 October 2013, 12:50
Maybe this error log can help experts to solve the problem:
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * log_message("Failed to get a recordset: mysql error: [2006: MyS...", 8, true, true) at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/errors.php:109, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * log_warn("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/errors.php:749, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * SQLException->__construct("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/dml.php:940, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * delete_records("site_data", "type", "disk-usage") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/mahara.php:2907, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * cron_site_data_daily() at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/cron.php:158, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * delete_records("site_data", "type", "disk-usage") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/mahara.php:2907, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * cron_site_data_daily() at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/cron.php:158, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * log_message("Failed to get a recordset: mysql error: [2006: MyS...", 8, true, true) at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/errors.php:109, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * log_warn("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/errors.php:749, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * SQLException->__construct("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/dml.php:905, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * set_field_select("cron", "nextrun", "2013-10-20 23:51:00", " WHERE "id" = ? ", array(size 1)) at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/dml.php:885, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * set_field("cron", "nextrun", "2013-10-20 23:51:00", "id", "14") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/cron.php:170, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * set_field_select("cron", "nextrun", "2013-10-20 23:51:00", " WHERE "id" = ? ", array(size 1)) at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/dml.php:885, referer: http://cronjob.de/?id=xxxxxx
[Sun Oct 20 00:11:37 2013] [error] [client 212.227.101.211] * set_field("cron", "nextrun", "2013-10-20 23:51:00", "id", "14") at /home/.sites/xxx/xxxxxxx/web/eportfolio/lib/cron.php:170, referer: http://cronjob.de/?id=xxxxxx
24 October 2013, 23:09
These cron lock entries are stuck in _config:
- _cron_lock_core_activity_process_queue
- _cron_lock_core_cron_site_data_daily
- _cron_lock_notification_emaildigest_send_digest
It would be great if someone could give help and support.
Thanks, Conrad
25 October 2013, 12:18
Hi Conrad,
Yes, the Mahara cron is split up into multiple tasks, which are stored in the database tables whose names end with "cron". So there's the main "cron" table that holds the tasks for Mahara core, and then there's, for instance, "artefact_cron", which holds the cron tasks for the individual artefact plugins.
Each cron task has its own cron lock entry in the database, as you've noticed, which is designed to prevent two copies of the same cron task from running at the same time. The downside to this is that if the cron job errors out in the middle of a cron task, then the lock doesn't get cleared and that task doesn't run again. The cron job takes this into account and deletes a lock once it is 24 hours old.
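As a side note, if you want to check for stuck locks yourself rather than hunting through the table by hand, a couple of queries along these lines should do it. This is just a sketch: it assumes the table is called "config" with the usual "field"/"value" columns, so adjust the name if your installation uses a table prefix (your mention of "_config" suggests it might):
select field, value from config where field like '%cron_lock%';
delete from config where field like '%cron_lock%';
The second statement is essentially the manual fix you've been applying, just without picking out the rows one by one (and of course you only want to run it when the cron isn't in the middle of a run).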
The three cron tasks you've listed are probably the most resource-intensive. "core_activity_process_queue" handles the sending of notifications, and "notification_emaildigest_send_digest" sends out digest emails, so those two are pretty important because they generate the emails students receive. "core_cron_site_data_daily" just gathers statistics for the admin report pages. These do sometimes fail due to resource issues, and I suspect once that happens it can snowball, since the task doesn't run for 24 hours, allowing a 24 hour backlog to build up, which then causes it to crash again, and so forth. So one thing you could try is dropping that delay time to 1 or 2 hours. It's in the "cron_lock()" function in lib/cron.php, on line 511.
On the other hand, if you've only got 50 students, it's quite unlikely these would run out of resources and error out. The error message that you showed above is erroring out while trying to execute a simple SQL statement ("delete from site_data where type = 'disk-usage';"), so that's a bit strange. That would appear to be part of the "core_cron_site_data_daily" task. Do you have error logs from where the other two tasks error out?
Cheers,
Aaron
26 October 2013, 5:32
Hi Aaron,
Thanks for your support and the tip to drop the 24-hour delay. I dropped the delay to 1 hour. But perhaps this will lead to the next challenge, at least after 23 days …
I don't know whether this log refers to those two task errors. Here is the log clipping:
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * SQLException->__construct("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/***/*******/web/*****/lib/dml.php:940, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * delete_records("site_data", "type", "disk-usage") at /home/.sites/***/*******/web/*****/lib/mahara.php:2907, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client 212.227.101.211] * cron_site_data_daily() at /home/.sites/***/*******/web/*****/lib/cron.php:158, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * delete_records("site_data", "type", "disk-usage") at /home/.sites/***/*******/web/*****/lib/mahara.php:2907, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * cron_site_data_daily() at /home/.sites/***/*******/web/*****/lib/cron.php:158, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * log_message("Failed to get a recordset: mysql error: [2006: MyS...", 8, true, true) at /home/.sites/***/*******/web/*****/lib/errors.php:109, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * log_warn("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/***/*******/web/*****/lib/errors.php:749, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * SQLException->__construct("Failed to get a recordset: mysql error: [2006: MyS...") at /home/.sites/***/*******/web/*****/lib/dml.php:905, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * set_field_select("cron", "nextrun", "2013-10-24 23:51:00", " WHERE "id" = ? ", array(size 1)) at /home/.sites/***/*******/web/*****/lib/dml.php:885, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * set_field("cron", "nextrun", "2013-10-24 23:51:00", "id", "14") at /home/.sites/***/*******/web/*****/lib/cron.php:170, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * set_field_select("cron", "nextrun", "2013-10-24 23:51:00", " WHERE "id" = ? ", array(size 1)) at /home/.sites/***/*******/web/*****/lib/dml.php:885, referer: http://cronjob.de/?id=xxxxxx
[Thu Oct 24 12:16:59 2013] [error] [client ***.***.***.***] * set_field("cron", "nextrun", "2013-10-24 23:51:00", "id", "14") at /home/.sites/***/*******/web/*****/lib/cron.php:170, referer: http://cronjob.de/?id=xxxxxx
Cheers,
Conrad
31 October 2013, 16:09
Hi Conrad,
You can tell which cron task errored out by looking carefully at which functions are being called in the error message. The error messages that Mahara prints out contain a PHP "error stack", which indicates which function PHP was in when it errored out, which function called that function, which function called that one, and so on up to the initial script that was requested. So in the error you pasted in, if you ignore the Apache stuff at the beginning with the date and the client IP address, you can see that the stack is basically:
- delete_records() in lib/mahara.php on line 2907
- ... called by cron_site_data_daily() on line 158 of cron.php
Hm, and then strangely that repeats a few times.
Anyway, the way this relates to the cron tasks is that each cron task's name is actually the name of the PHP function that is called for that task. So if you see "cron_site_data_daily()" in the stack, you know it was that task being executed. The three locks you listed correspond to these functions:
- activity_process_queue()
- cron_site_data_daily()
- send_digest() (in notification/emaildigest/lib.php)
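If you want to see when those tasks are next scheduled to run, and whether one of them is stuck with a time in the past, a query like the following against the core "cron" table should show it (a sketch only; the column names are the ones that appear in your error log):
select id, callfunction, nextrun from cron order by nextrun;
The digest task won't appear there, because it belongs to the notification plugin and so lives in that plugin's own cron table (something like "notification_cron", if I remember the naming correctly), but the two core tasks will.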
It appears you're invoking the cron job by calling it via its web address. One thing you can try, if you have sufficient access on your web server, is to instead invoke the cron job at the command-line, using the command-line PHP utility.
Cheers,
Aaron
06 November 2013, 21:01
Hi Aaron,
thanks for your answer. I can't run the cron from the command line at a 5-minute interval. The problem persists: after three or four days the cron gets stuck.
What is striking is that the daily statistics show the data of the current day, not that of the day before, regardless of whether the cron job runs or not. That's strange, and it never happened before.
Can I influence how the daily statistics are generated? Where is the data stored?
Thanks,
Conrad
10 November 2013, 8:14
Does anyone have a clue how to solve the daily cron problem (bug)?
11 November 2013, 13:38
Hi Conrad,
Did you see the same errors when trying to run the cron at the command line? Try doing this query...
update cron set nextrun = null where callfunction = 'cron_site_data_daily'
... and then run the cron from the command line. That way you can make sure that particular task actually runs during your command-line execution. The point of doing this would be 1) to see whether you get clearer error messages when running it at the command line, and 2) if it works at the command line, to set up a crontab to run it from the command line all the time instead of through the web. See https://wiki.mahara.org/index.php/System_Administrator%27s_Guide/Cron_Job#Calling_cron.php_via_the_command_line
Another alternative is that, if you can live without the daily stats report, you could just disable the cron_site_data_daily() cron task. The simplest way to do that would be to delete its record from the cron table:
delete from cron where callfunction = 'cron_site_data_daily';
As for what it does, the simplest answer is to just look at the source code for the function cron_site_data_daily() in htdocs/lib/mahara.php on line 2834. In brief, it runs queries against the "usr", "group", and "views" tables to see what's going on in the site, and then it inserts statistics into the "site_data" table. It also reads the count of how many times each page has been viewed from a log file under the Mahara dataroot and stores that into the DB, then wipes that log file, and updates the count of how many times each view was visited. Next it checks how big the Mahara dataroot directory is, and it inserts a statistic about that into the "site_data" table. And lastly it draws some graphs about the data in the "site_data" table and stores them as images under the dataroot directory so they're ready to be viewed on the stats report page.
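If you want to see what that task has actually recorded so far, you can peek at the table directly; something like this should work (a sketch, assuming the usual "ctime", "type" and "value" columns in "site_data"):
select ctime, type, value from site_data order by ctime desc limit 20;
That should make it obvious whether the daily rows are being written at all, and with which dates.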
One thing I do note about this process is that we are apparently redundantly storing page visits in a log file in the dataroot ($cfg->dataroot/log/views.log) and then inserting them into the database during the cron. It seems like we should just insert them directly into the database in the first place, so I may correct that.
Anyway, the queries it makes against the usr, groups, and views tables can be pretty intensive if you have a large site (like 10,000s of users), but your site sounds too small for that to be a problem. And the error messages you saw indicated that it was erroring out while trying to delete records from the "site_data" table, which should not be an intensive operation. So I suspect that the problem you're having may not be a resource-usage issue.
Cheers,
Aaron
12 November 2013, 20:05
Hi Aaron,
thanks for your explanation; it helped me understand how and why the daily statistics work the way they do.
Because my Mahara instance is running on a hosted server, I have to ask my provider to get the cron job running from the command line.
Nevertheless, there seem to be other malfunctions. The daily statistics should record the statistics of the previous day. In my case the system tries to read and write the data of the current day and therefore shows zero. I tried to solve this by deleting the last entry of the day in the database, but the next day the daily statistics are read and written again from the data of the current date. So: what bug causes this, and how can I fix it?
Thanks and Regards
Conrad