Migration to vermaden.wordpress.com

For those who did not notice: I have moved to the https://vermaden.wordpress.com/ page.
This blog will remain here as a legacy archive.
Regards,
vermaden
The sysutils/automount port is quite useful for automounting removable
storage and offers a lot of helpful information in its /var/log/automount.log log file,
but that file can grow quite large after heavy usage.
# du -sh /var/log/automount.log
340M    /var/log/automount.log
To make FreeBSD's newsyslog(8) rotate it automatically we need to add it to the
/etc/newsyslog.conf config file as shown below.
# echo '/var/log/automount.log 644 7 100 @T00 JC' >> /etc/newsyslog.conf
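For the record, the fields in that line mean the following (a short annotated breakdown of the same entry; see newsyslog.conf(5) for the full syntax):

# logfilename            mode  count  size  when  flags
/var/log/automount.log   644   7      100   @T00  JC

So the log file keeps mode 644, at most 7 rotated copies are kept, rotation happens when the file exceeds 100 kilobytes or at midnight (@T00), the J flag compresses the rotated copy with bzip2(1) and the C flag tells newsyslog(8) to create the log file if it does not already exist.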
Now we should restart the newsyslog(8) service ...
# /etc/rc.d/newsyslog restart
Creating and/or trimming log files.
... and trigger a trim of the /var/log/automount.log file.
# newsyslog -v
Processing /etc/newsyslog.conf
/var/log/all.log <7J>: does not exist, skipped.
/var/log/amd.log <7J>: does not exist, skipped.
/var/log/auth.log <7J>: --> will trim at Tue Dec 31 23:00:00 2013
/var/log/console.log <5J>: does not exist, skipped.
/var/log/cron <3J>: size (Kb): 44 [100] --> skipping
/var/log/daily.log <7J>: does not exist, skipped.
/var/log/debug.log <7J>: size (Kb): 71 [100] --> skipping
/var/log/kerberos.log <7J>: does not exist, skipped.
/var/log/lpd-errs <7J>: size (Kb): 1 [100] --> skipping
/var/log/maillog <7J>: --> will trim at Thu Jun 27 00:00:00 2013
/var/log/messages <5J>: --> will trim at Tue Dec 31 23:00:00 2013
/var/log/monthly.log <12J>: does not exist, skipped.
/var/log/pflog <3J>: does not exist, skipped.
/var/log/ppp.log <3J>: size (Kb): 53 [100] --> skipping
/var/log/security <10J>: size (Kb): 1 [100] --> skipping
/var/log/sendmail.st <10>: age (hr): 62 [168] --> skipping
/var/log/utx.log <3>: --> will trim at Mon Jul 1 05:00:00 2013
/var/log/weekly.log <5J>: does not exist, skipped.
/var/log/xferlog <7J>: size (Kb): 1 [100] --> skipping
/var/log/automount.log <7J>: size (Kb): 348293 [100] --> trimming log....
Signal all daemon process(es)...
Notified daemon pid 1046 = /var/run/syslog.pid
Pause 10 seconds to allow daemon(s) to close log file(s)
Compress all rotated log file(s)...
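By the way, if the log had not yet crossed the 100 kB limit or the @T00 time, the rotation of just this one file could also be forced, something along these lines:

# newsyslog -F -v /var/log/automount.log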
Now the file size is more 'normal' and it will never again grow that large over time.
# du -sh /var/log/automount.log
1.0k    /var/log/automount.log
The rotated old log is also compressed.
# ls -lh /var/log/automount*
-rw-r--r-- 1 root wheel  76B 2013.06.26 13:46 /var/log/automount.log
-rw-r--r-- 1 root wheel  12M 2013.06.26 13:46 /var/log/automount.log.0.bz2
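The compressed rotated copy can still be read without unpacking it to disk, for example like that:

# bzcat /var/log/automount.log.0.bz2 | tail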
I recently came across the HP Universal Discovery software, but it is very complicated, requires a separate server and database, and is far from complete when it comes to gathering data from UNIX and Linux systems.
So I sat down one day and thought about what information about a system You need to say that You know enough about it and that You can take responsibility for its uptime ... and of course how to gather that information from a running system quickly and efficiently.
After several hours I already had a prototype which gathers information about UNIX and Linux systems; having no idea for a better name I ended up with gatherinfo. It is a simple gatherinfo.sh script that depends only on POSIX sh(1) and echo/cat/sed; all other commands are used to gather info from the running system. The result of the script is a report-like HTML file named gatherinfo.sh.$( hostname ).htm with the outputs of the desired commands.
As the script runs it shows which command is currently being processed, so You know why it may take that much time; usually its work is done in less than a minute. And yes, it does need to run as root.
# gatherinfo.sh
top -d 1
sockstat
ps ax
ps aux
ps auxwww
ps auxefw
pstree -A
pstree -A -a
lsof
cat /etc/hostid
cat /etc/freebsd-update.conf
grep enable /etc/rc.conf
kldstat
kldstat -v
jls
vmstat 1 5
(...)
And the end result looks like the one below. You can expand/collapse each command so it is not several kilometers long ;)
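To give You a rough idea how such a report can be put together, below is a heavily simplified sketch of the general approach; it is NOT the actual gatherinfo.sh code, the command list and the <details>/<summary> markup are only an illustration.

#!/bin/sh
# simplified sketch only - not the real gatherinfo.sh
# run a list of commands and dump each output into a collapsible
# section of a single HTML report file
FILE="gatherinfo.sh.$( hostname ).htm"

echo "<html><body>" > "${FILE}"

for CMD in "uname -a" "sockstat" "ps aux" "kldstat"
do
  echo "${CMD}"   # show progress on the terminal
  echo "<details><summary>${CMD}</summary><pre>" >> "${FILE}"
  ${CMD} >> "${FILE}" 2>&1
  echo "</pre></details>" >> "${FILE}"
done

echo "</body></html>" >> "${FILE}"

The real script of course handles more commands and per-OS differences.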
Of course it is far from complete after two days of messing with it, but I will add more and more useful commands.
Currently it gathers information from FreeBSD and Linux, but my personal TODO contains operating systems like AIX, Solaris, HP-UX and of course the other BSDs. Besides that I will also add cluster and/or HA software like FreeBSD's HAST/CARP, Linux RHCS, Oracle Clusterware, Sun Cluster, HP Serviceguard and AIX PowerHA. Various Veritas storage and HA solutions are also on my TODO list.
I have created the https://github.com/vermaden/gatherinfo repository for the development.
Feel free to submit your favorite commands ;)
IBM employee Nigel Griffiths published and regularly updates an interesting article [1] on developerWorks about how a web server works. He actually wrote a tiny web server named nweb in 200 lines of C code and explains how each part of the code is used to serve static content. Below is a small quote from the article introduction.
Have you ever wanted to run a tiny, safe web server without worrying about using a fully blown web server that could be complex to install and configure? Do you wonder how to write a program that accepts incoming messages with a network socket? Have you ever just wanted your own Web server to experiment and learn with? Further updates in 2012 to support recent web-server and browser standards and a code refresh.
Well, look no further - nweb is what you need. This is a simple Web server that has only 200 lines of C source code. It runs as a regular user and can't run any server-side scripts or programs, so it can't open up any special privileges or security holes.
This article covers:
At the bottom of the page You can download the nweb source code (the nweb23.c file after extraction); the code compiles and runs on the following systems:
... but it will not compile on FreeBSD:
# gcc nweb23.c
nweb23.c: In function 'main':
nweb23.c:165: error: 'SIGCLD' undeclared (first use in this function)
nweb23.c:165: error: (Each undeclared identifier is reported only once
nweb23.c:165: error: for each function it appears in.)
nweb23.c:169: error: too few arguments to function 'setpgrp'

% clang nweb23.c
nweb23.c:165:15: error: use of undeclared identifier 'SIGCLD'
        (void)signal(SIGCLD, SIG_IGN); /* ignore child death */
                     ^
nweb23.c:169:16: error: too few arguments to function call, expected 2, have 0
        (void)setpgrp();                /* break away from process group */
        ~~~~~~~       ^
/usr/include/unistd.h:455:1: note: 'setpgrp' declared here
int      setpgrp(pid_t _pid, pid_t _pgrp); /* obsoleted by setpgid() */
^
2 errors generated.
To fix the undeclared SIGCLD error we need to add this after the #include (...) and #define (...) lines.
#ifndef SIGCLD
# define SIGCLD SIGCHLD
#endif
So this is what we have changed so far compared to the original nweb23.o.c source code file.
--- nweb23.o.c  2012-08-19 23:15:45.000000000 +0200
+++ nweb23.c    2013-03-12 14:07:16.021757627 +0100
@@ -15,6 +15,9 @@
 #define LOG 44
 #define FORBIDDEN 403
 #define NOTFOUND 404
+#ifndef SIGCLD
+# define SIGCLD SIGCHLD
+#endif
 
 struct {
 	char *ext;
Let's try the compilation now.
% gcc nweb23.c
nweb23.c: In function 'main':
nweb23.c:172: error: too few arguments to function 'setpgrp'

% clang nweb23.c
nweb23.c:172:16: error: too few arguments to function call, expected 2, have 0
        (void)setpgrp();                /* break away from process group */
        ~~~~~~~       ^
/usr/include/unistd.h:455:1: note: 'setpgrp' declared here
int      setpgrp(pid_t _pid, pid_t _pgrp); /* obsoleted by setpgid() */
^
1 error generated.
So we have one error left. The setpgrp(); call in the code is made without arguments, while on FreeBSD it takes two, setpgrp(pid_t _pid, pid_t _pgrp); let's fix that. In line 172 we will put (void)setpgrp(getpid(),getpid()); instead of (void)setpgrp();.
% gcc nweb23.c
% echo $?
0
% clang nweb23.c
% echo $?
0
Voila! It now compiles; here is the diff(1) for both changes.
--- nweb23.o.c  2012-08-19 23:15:45.000000000 +0200
+++ nweb23.c    2013-03-12 14:20:10.561753772 +0100
@@ -15,6 +15,9 @@
 #define LOG 44
 #define FORBIDDEN 403
 #define NOTFOUND 404
+#ifndef SIGCLD
+# define SIGCLD SIGCHLD
+#endif
 
 struct {
 	char *ext;
@@ -166,7 +169,7 @@
 	(void)signal(SIGHUP, SIG_IGN); /* ignore terminal hangups */
 	for(i=0;i<32;i++)
 		(void)close(i);		/* close open files */
-	(void)setpgrp();		/* break away from process group */
+	(void)setpgrp(getpid(),getpid());		/* break away from process group */
 	logger(LOG,"nweb starting",argv[1],getpid());
 	/* setup the network socket */
 	if((listenfd = socket(AF_INET, SOCK_STREAM,0)) <0)
Let's now check how (and if) it works.
% ./a.out
hint: nweb Port-Number Top-Directory		version 23

	nweb is a small and very safe mini web server
	nweb only servers out file/web pages with extensions named below
	 and only from the named directory or its sub-directories.
	There is no fancy features = safe and secure.

	Example: nweb 8181 /home/nwebdir &

	Only Supports: gif jpg jpeg png ico zip gz tar htm html
	Not Supported: URLs including "..", Java, Javascript, CGI
	Not Supported: directories / /etc /bin /lib /tmp /usr /dev /sbin
	No warranty given or implied
	Nigel Griffiths nag@uk.ibm.com
The nweb server seems to work, so let's try to serve the directory with the extracted files.
% ./a.out 8080 ./
% echo $?
0
% ps ax | grep a.out
40755  7  S     0:00.00 ./a.out 8080 ./
% netstat -a -n -f inet | grep 8080
tcp4       0      0 *.8080                 *.*                    LISTEN
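Before starting the browser we can also check from the command line that nweb really serves a file, assuming that the extracted archive contains an index.html page (fetch(1) comes with the FreeBSD base system):

% fetch -q -o - http://127.0.0.1:8080/index.html | head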
It seems to work; let's try that in the browser.
I did not think that I would ever quote the default Apache page, but "It works!".
[1] http://www.ibm.com/developerworks/systems/library/es-nweb/index.html