Notes |
|
|
Hi,
we have the same problem here!!!
On the dashboard, the HTTP Proxy status is off!!! |
|
|
|
Hi,
I have the same problem.
I don't know if it's related, but I found these errors in /var/log/httpd/error_log:
Use of uninitialized value in substitution (s///) at /home/httpd/html/proxy.pac line 287.
Use of uninitialized value in substitution (s///) at /home/httpd/html/proxy.pac line 163.
Use of uninitialized value in substitution (s///) at /home/httpd/html/proxy.pac line 160.
Use of uninitialized value in substitution (s///) at /home/httpd/html/proxy.pac line 163. |
|
|
|
Hello again,
the problem is recurrent!!! Now we need to restart DansGuardian from time to time.
The last time, I saw many TIME_WAIT connections on ports 3128 (squid), 8080 (squid??) and 9998 (havp):
=====================================
# netstat -putan | egrep '(8080)' | wc -l
667
# netstat -putan | egrep '(3128)' | wc -l
213
# netstat -putan | egrep '(999)' | wc -l
1331
# netstat -putan | grep TIME_WAIT | egrep '(8080)' | wc -l
589
# netstat -putan | grep TIME_WAIT | egrep '(3128)' | wc -l
103
# netstat -putan | grep TIME_WAIT | egrep '(999)' | wc -l
1165
=====================================
The log files don't show anything...
All the services appear to be working when checked in the OS:
=====================================
# /etc/init.d/dansguardian status
Parent DansGuardian pid:3696
# /etc/init.d/squid status
squid (pid 7194 7192) is running...
# /etc/init.d/havp status
havp (pid 2666 2665 2664 2663 2662 2661 2660 2659 2658 2657 2656 2654 2653 2652 2651 2650 2649 2648 2647 2646 2645 2644 2643 2642 2641 2640 2639 2638 2637 2636 2635 2634 2633 2632 2631 2630 2629 2628 2627 2626 2624) is running...
# /etc/init.d/clamd status
clamd (pid 7180) is running...
=====================================
Maybe the port the proxy works on (3128) is not the default???
I believe the problem is with the antivirus, but I can't find it (yet).
Hmm... that is bad!! :-( |
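As an aside, the repeated `netstat | egrep | wc -l` pipelines above can be collected in one loop. This is just a sketch, assuming the net-tools `netstat` is available; the port list (3128, 8080, 9998) is taken from the thread. Note that the original `egrep '(999)'` also matches ports 9990-9999, so matching the exact port gives a tighter count:

```shell
# Count TIME_WAIT sockets per proxy port in one pass.
# Ports: 3128 (squid), 8080 (squid??), 9998 (havp) -- as reported in this thread.
for port in 3128 8080 9998; do
    count=$(netstat -ant 2>/dev/null \
        | awk -v p=":$port$" '$6 == "TIME_WAIT" && ($4 ~ p || $5 ~ p)' \
        | wc -l)
    echo "port $port: $count TIME_WAIT"
done
```

Anchoring the pattern with `:$port$` avoids counting unrelated ports that merely contain the same digits.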
|
|
|
Setting the following values in /etc/dansguardian/dansguardian.conf fixed it for me:
maxchildren = 400
minchildren = 32
minsparechildren = 15
preforkchildren = 8
maxsparechildren = 64
maxagechildren = 4000
It looks like these values were changed from 2.5 going forward, which severely reduces the number of connections that can be open at the same time. Change them in the .tmpl file if you want the values to stick. |
|
|
|
Hi,
this is also interesting:
http://bugs.endian.com/view.php?id=3432
In my case, when a user visits a streaming site,
havp takes a long time (and all its threads) to process each of those connections.
:-( |
|
|
|
Hi,
I am using Endian for the first time and don't know much about it yet.
Could someone please tell me how to edit the $MAXCHILDREN value in the /etc/dansguardian/dansguardian.conf.tmpl file?
maxchildren = $MAXCHILDREN
minchildren = $MINCHILDREN
minsparechildren = $MINSPARECHILDREN
preforkchildren = $PREFORKCHILDREN
maxsparechildren = $MAXSPARECHILDREN
maxagechildren = $MAXAGECHILDREN |
|
|
|
@ anand_apscan
You can do the following:
nano /usr/lib/efw/dansguardian/default/settings
Change the values as below:
MAXCHILDREN=500
MINCHILDREN=128
MINSPARECHILDREN=32
PREFORKCHILDREN=16
MAXSPARECHILDREN=256
MAXAGECHILDREN=10000
Also, if you are facing slow browsing issues, change the following values:
nano /var/efw/havp/settings
and change these two values as below:
MAXSERVERS=150
SERVERNUMBER=50 |
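The edits described above can also be applied non-interactively with sed instead of nano. This is only a sketch: the file path and variable names are the ones given in the previous comment, it works on a copy so you can inspect the result before touching the real file, and the "current" values written into the fallback sample are made up for illustration:

```shell
# Sketch: raise the DansGuardian child limits without an editor.
# On a real Endian box the file is /usr/lib/efw/dansguardian/default/settings;
# here we work on a local copy (falling back to a made-up sample if it is absent).
SETTINGS=settings.copy
cp /usr/lib/efw/dansguardian/default/settings "$SETTINGS" 2>/dev/null || cat > "$SETTINGS" <<'EOF'
MAXCHILDREN=120
MINCHILDREN=8
EOF
for kv in MAXCHILDREN=500 MINCHILDREN=128 MINSPARECHILDREN=32 \
          PREFORKCHILDREN=16 MAXSPARECHILDREN=256 MAXAGECHILDREN=10000; do
    key=${kv%%=*}
    # Replace the line if the key is already present, append it otherwise.
    if grep -q "^$key=" "$SETTINGS"; then
        sed -i "s/^$key=.*/$kv/" "$SETTINGS"
    else
        echo "$kv" >> "$SETTINGS"
    fi
done
grep CHILDREN "$SETTINGS"
```

After verifying the copy, move it into place and restart DansGuardian (e.g. via its init script) so the new limits take effect.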
|
|
|
I have reason to believe it has something to do with havp.
/var/efw/havp/settings does not exist on my 2.5.1, and creating it does not solve the problem. Changing the dansguardian settings as above didn't do the trick for me either.
Disabling antivirus on all content filter profiles solved the performance issue. |
|