
Diff of /monit/monit.pod


revision 1.61 by martinp, Fri Feb 14 08:22:33 2003 UTC revision 1.62 by hauk, Mon Feb 17 14:18:01 2003 UTC
# Line 167  The I<quit> argument will kill a running Line 167  The I<quit> argument will kill a running
167  of waking it up.  of waking it up.
168    
169    
 =head2 monit lock file  
   
 monit utilizes a lock file to prevent concurrent runs in daemon  
 mode. That is, only one monit daemon is permitted per user. The  
 lock file contains the process id (pid) from the current running  
 monit daemon. If monit is run by the root user the location of  
 the lock file is either I</var/run/monit.pid> or  
 I</etc/monit.pid> depending on the operating system. For a  
 non-root user the location of the lock file is  
 I<$HOME/.monit.pid>. The lock file is removed when a monit daemon  
 is stopped.  
   
 Normally it is not necessary to consider the location of the lock  
 file, but in certain special situations you may need to control  
 it. For instance, if you run monit as root on two different  
 machines that share the same file system, you will need to set the  
 location of the monit lock file explicitly for each monit daemon  
 via the I<-p> option.  You can also set the location of the lock  
 file on a more permanent basis via this global set-statement in a  
 monitrc control file (keywords are in capitals):  
   
    SET PIDFILE {pidfile}  
   
 For instance, I<set pidfile /run/monit.pid>.  
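
 Equivalently, each daemon can be given its own lock file on the
 command line with the I<-p> option. A minimal sketch for the
 two-machine case above; the paths and the poll interval are
 illustrative:

    # machine 1
    monit -c /etc/monitrc -d 60 -p /var/run/monit-node1.pid

    # machine 2
    monit -c /etc/monitrc -d 60 -p /var/run/monit-node2.pid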
   
   
170  =head1 INIT SUPPORT  =head1 INIT SUPPORT
171    
172  Monit can be run and controlled from I<init>. In case monit  Monit can be run and controlled from I<init>. In case monit
# Line 248  Show the status of a program group: Line 222  Show the status of a program group:
222    monit -g <groupname> status    monit -g <groupname> status
223    
224    
225  =head1 MONITORING MODE SELECTION  =head1 MONITORING MODE
226    
227  Monit supports three monitoring modes per service: active,  Monit supports three monitoring modes per service: active,
228  passive and manual. See also the example section below for usage  passive and manual. See also the example section below for usage
# Line 263  specifically B<not> try to fix a problem Line 237  specifically B<not> try to fix a problem
237  alerts in case of a problem.  alerts in case of a problem.
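
For illustration, a sketch of a service entry in passive mode; the
mysql service and its paths are hypothetical:

   check mysql with pidfile /var/run/mysqld.pid
         start program = "/etc/init.d/mysql start"
         stop program  = "/etc/init.d/mysql stop"
         mode  passive
         alert foo@bar

With such an entry monit only reports problems to foo@bar and never
restarts the service on its own.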
238    
239  For use in clustered environments there is also a I<manual>  For use in clustered environments there is also a I<manual>
240  mode. In this mode, monit will enter I<active> mode only if a  mode. In this mode, monit will enter I<active> mode B<only> if a
241  service was started under monit's control, for example by:  service was started under monit's control, for example by:
242    
243    monit start sybase    monit start sybase
# Line 283  or Line 257  or
257    
258  monit will not monitor the service at all. This allows for having  monit will not monitor the service at all. This allows for having
259  services configured in monitrc and starting them with monit only when  services configured in monitrc and starting them with monit only when
260  they should run. This can be used to build simple failsafe  they should run. This can be used to build simple failsafe clusters.
261  clusters. For instance, using the I<heartbeat> system  You can read more about how to set up a cluster with monit
262  (http://linux-ha.org/) to watch the health of nodes and, in case  using the I<heartbeat> system in the examples section below.
 one machine fails, to start services on a secondary node. See the  
 section below for more information.  
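
As a sketch (the sybase entry and its paths are hypothetical), such
a cluster-controlled service is simply declared in manual mode:

   check sybase with pidfile /var/run/sybase.pid
         start program = "/etc/init.d/sybase start"
         stop program  = "/etc/init.d/sybase stop"
         mode  manual

Monitoring of this entry begins only after I<monit start sybase> is
run on the node that should host the service.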
   
   
   
 =head2 Monit with Heartbeat  
   
 The first thing to do is to install and configure I<heartbeat>  
 (http://www.linux-ha.org/downloads). It can be useful to have a  
 look at The Heartbeat Getting Started Guide at:  
 http://www.linux-ha.org/download/GettingStarted.html.  
   
 B<Starting up a Node>  
   
 This is the normal start sequence for a cluster node. With this  
 sequence, there should be no error case that is not handled by  
 either heartbeat or monit. For example, if monit dies, initd  
 restarts it. If heartbeat dies, monit restarts it. If the node  
 dies, the heartbeat instance on the other node detects it and  
 restarts the services there.  
   
  1) initd starts monit with group local  
  2) monit starts heartbeat in local group  
  3) heartbeat requests monit to start the node group  
  4) monit starts the node group  
   
 B<Monit F</etc/monitrc>>  
   
 This example describes a cluster with two nodes. Services running on  
 Node 1 are in the group I<node1> and Node 2 services are in the  
 group I<node2>.  
   
 The local group entries are in mode I<active>; the node group  
 entries are in mode I<manual> and controlled by heartbeat.  
   
   
  #  
  # local services on every host  
  #  
  check heartbeat with pidfile /var/run/heartbeat.pid  
        start program = "/etc/init.d/heartbeat start"  
       stop  program = "/etc/init.d/heartbeat stop"  
        mode  active  
        alert foo@bar  
        group local  
   
  check postfix with pidfile /var/spool/postfix/pid/master.pid  
        start program = "/etc/init.d/postfix start"  
        stop program  = "/etc/init.d/postfix stop"  
        mode  active  
        alert foo@bar  
        group local  
   
  #  
  # node1 services  
  #  
   
  check apache with pidfile /var/apache/logs/httpd.pid  
        start program = "/etc/init.d/apache start"  
        stop program  = "/etc/init.d/apache stop"  
        depends named  
        alert foo@bar  
        mode  manual  
        group node1  
   
  check named with pidfile /var/tmp/named.pid  
        start program = "/etc/init.d/named start"  
        stop program  = "/etc/init.d/named stop"  
        alert foo@bar  
        mode  manual  
        group node1  
   
  #  
  # node2 services  
  #  
   
  check named-slave with pidfile /var/tmp/named-slave.pid  
        start program = "/etc/init.d/named-slave start"  
        stop program  = "/etc/init.d/named-slave stop"  
        mode  manual  
        alert foo@bar  
        group node2  
   
  check squid with pidfile /var/squid/logs/squid.pid  
        start program = "/etc/init.d/squid start"  
        stop program  = "/etc/init.d/squid stop"  
        depends named-slave  
        alert foo@bar  
        mode  manual  
        group node2  
   
   
 B<initd  F</etc/inittab>>  
   
 Monit is started on both nodes with initd. You will need to add  
 an entry in F</etc/inittab> to start monit with the same local  
 group heartbeat is a member of.  
263    
  #/etc/inittab  
  mo:2345:respawn:/usr/local/bin/monit -i -d 10 -c /etc/monitrc -g local  
   
 B<heartbeat  F</etc/ha.d/haresources>>  
   
 When heartbeat starts, it looks up the node entry and starts the  
 script F</etc/init.d/monit-node1> or  
 F</etc/init.d/monit-node2>. The script calls monit to start the  
 specific group for that node.  
   
  # /etc/ha.d/haresources  
  node1 IPaddr::172.16.100.1  monit-node1  
  node2 IPaddr::172.16.100.2  monit-node2  
   
   
 B<F</etc/init.d/monit-node1>>  
   
  #!/bin/bash  
  #  
  # sample script for starting/stopping all services on node1  
  #  
  prog="/usr/local/bin/monit -g node1"  
  start()  
  {  
        echo -n $"Starting $prog:"  
        $prog start  
        echo  
  }  
   
  stop()  
  {  
        echo -n $"Stopping $prog:"  
        $prog stop  
        echo  
  }  
   
  case "$1" in  
        start)  
             start;;  
        stop)  
             stop;;  
        *)  
             echo $"Usage: $0 {start|stop}"  
             RETVAL=1  
  esac  
  exit $RETVAL  
264    
265    
266  =head1 ALERT MESSAGES  =head1 ALERT MESSAGES
# Line 538  statement is as follows: Line 369  statement is as follows:
369        from: monit@localhost        from: monit@localhost
370     subject: apache $EVENT at $DATE     subject: apache $EVENT at $DATE
371     message: Monit restarted $PROGRAM at $DATE on $HOST.     message: Monit restarted $PROGRAM at $DATE on $HOST.
372       Your joke for today is:              Yours sincerely,
373       Things You Do Not Want Your System Administrator to Say:              monit
         * Ooops.  
         * Wow!! Look at this ...  
         * Hey!! The Suns don't do this.  
         * Terminated??!  
         * What software license?  
         * Well, it's doing something ...  
         * Wow! ... That seemed fast ...  
         * Where's the DIR command?  
         * Why is my "rm" taking so long?  
         * System coming down in 0 min ...  
374   }   }
375    
376  Where the keyword I<from:> is the email address monit should  Where the keyword I<from:> is the email address monit should
# Line 1143  the depend statement is simply: Line 964  the depend statement is simply:
964    
965  Where B<process> is a process entry name, for instance B<apache>.  Where B<process> is a process entry name, for instance B<apache>.
966  You may add more than one process name or use more than one  You may add more than one process name or use more than one
967  depend statement in a check entry.  depend statement in an entry.
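
For example, a sketch of two entries where apache depends on named;
the paths follow the heartbeat example below:

   check apache with pidfile /var/apache/logs/httpd.pid
         start program = "/etc/init.d/apache start"
         stop program  = "/etc/init.d/apache stop"
         depends named

   check named with pidfile /var/tmp/named.pid
         start program = "/etc/init.d/named start"
         stop program  = "/etc/init.d/named stop"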
968    
969  Processes specified in a I<depends> statement will be checked  Processes specified in a I<depends> statement will be checked
970  during stop/start operations. If a process is stopped it will  during stop/start operations. If a process is stopped it will
971  first stop any processes that depend on it. Likewise, if a  stop any processes that depend on it. Likewise, if a process
972  process is started, it will first stop any processes that depend  is started, it will first stop any processes that depend on
973  on it and, after it is started, start all depending processes  it and, after it is started, start all depending processes
974  again.  again.
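
With the apache/named sketch above, the resulting order is roughly:

   monit stop named    # stops apache first, then named
   monit start named   # stops apache, starts named, then starts apache again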
975    
976  Consider the following common server setup:  Consider the following common server setup:
# Line 1198  A depend loop is for example; a->b and b Line 1019  A depend loop is for example; a->b and b
1019    
1020  When monit starts it will check for any such loops and complain  When monit starts it will check for any such loops and complain
1021  and exit if a loop was found. It will also exit with a complaint  and exit if a loop was found. It will also exit with a complaint
1022  if a depend statement was used that does not point to any  if a depend statement was used that does not point to any process
1023  processes in the controlfile.  in the controlfile.
1024    
1025  =back  =back
1026    
# Line 1255  much easier to read at a glance. The pun Line 1076  much easier to read at a glance. The pun
1076                   as the smtp-server for sending mail.                   as the smtp-server for sending mail.
1077   set mail-format Set a global mail format for all alert   set mail-format Set a global mail format for all alert
1078                   messages emitted by monit.                   messages emitted by monit.
1079     set pidfile     Explicitly set the location of the monit lock
1080                     file. E.g. set pidfile /run/monit.pid.
1081   set httpd port  Activates monit http server at the given   set httpd port  Activates monit http server at the given
1082                   portnumber.                   portnumber.
1083   ssl enable      Enables ssl support for the httpd server.   ssl enable      Enables ssl support for the httpd server.
# Line 1325  much easier to read at a glance. The pun Line 1148  much easier to read at a glance. The pun
1148                   after the protocol keyword mentioned above.                   after the protocol keyword mentioned above.
1149                    - for http it can contain an URI and an                    - for http it can contain an URI and an
1150                      optional query string.                      optional query string.
1151                    - other protocols doesn't support this                      - other protocols do not support this
1152                      statement yet                      statement yet
1153   unix(socket)    Specifies a unix socket file and used like   unix(socket)    Specifies a unix socket file and used like
1154                   the port statement above to test a Unix                   the port statement above to test a Unix
# Line 1644  the other statements are optional and th Line 1467  the other statements are optional and th
1467  statements is not important.  statements is not important.
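
Tying the statements together, a minimal control file might look
like the following sketch; the port number, addresses and paths are
illustrative:

   set pidfile /run/monit.pid
   set httpd port 2812

   check apache with pidfile /var/apache/logs/httpd.pid
         start program = "/etc/init.d/apache start"
         stop program  = "/etc/init.d/apache stop"
         alert foo@bar
         group www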
1468    
1469    
1470    =head2 Monit with Heartbeat
1471    
1472    The first thing to do is to install and configure I<heartbeat>
1473    http://www.linux-ha.org/download/. It can be useful to have a
1474    look at The Heartbeat Getting Started Guide at:
1475    http://www.linux-ha.org/GettingStarted.html
1476    
1477    B<Starting up a Node>
1478    
1479    This is the normal start sequence for a cluster node. With this
1480    sequence, there should be no error case that is not handled by
1481    either heartbeat or monit. For example, if monit dies, initd
1482    restarts it. If heartbeat dies, monit restarts it. If the node
1483    dies, the heartbeat instance on the other node detects it and
1484    restarts the services there.
1485    
1486     1) initd starts monit with group local
1487     2) monit starts heartbeat in local group
1488     3) heartbeat requests monit to start the node group
1489     4) monit starts the node group
1490    
1491    B<Monit F</etc/monitrc>>
1492    
1493    This example describes a cluster with two nodes. Services running on
1494    Node 1 are in the group I<node1> and Node 2 services are in the
1495    group I<node2>.
1496    
1497    The local group entries are in mode I<active>; the node group
1498    entries are in mode I<manual> and controlled by heartbeat.
1499    
1500    
1501     #
1502     # local services on every host
1503     #
1504     check heartbeat with pidfile /var/run/heartbeat.pid
1505           start program = "/etc/init.d/heartbeat start"
1506           stop  program = "/etc/init.d/heartbeat stop"
1507           mode  active
1508           alert foo@bar
1509           group local
1510    
1511     check postfix with pidfile /var/spool/postfix/pid/master.pid
1512           start program = "/etc/init.d/postfix start"
1513           stop program  = "/etc/init.d/postfix stop"
1514           mode  active
1515           alert foo@bar
1516           group local
1517    
1518     #
1519     # node1 services
1520     #
1521    
1522     check apache with pidfile /var/apache/logs/httpd.pid
1523           start program = "/etc/init.d/apache start"
1524           stop program  = "/etc/init.d/apache stop"
1525           depends named
1526           alert foo@bar
1527           mode  manual
1528           group node1
1529    
1530     check named with pidfile /var/tmp/named.pid
1531           start program = "/etc/init.d/named start"
1532           stop program  = "/etc/init.d/named stop"
1533           alert foo@bar
1534           mode  manual
1535           group node1
1536    
1537     #
1538     # node2 services
1539     #
1540    
1541     check named-slave with pidfile /var/tmp/named-slave.pid
1542           start program = "/etc/init.d/named-slave start"
1543           stop program  = "/etc/init.d/named-slave stop"
1544           mode  manual
1545           alert foo@bar
1546           group node2
1547    
1548     check squid with pidfile /var/squid/logs/squid.pid
1549           start program = "/etc/init.d/squid start"
1550           stop program  = "/etc/init.d/squid stop"
1551           depends named-slave
1552           alert foo@bar
1553           mode  manual
1554           group node2
1555    
1556    
1557    B<initd  F</etc/inittab>>
1558    
1559    Monit is started on both nodes with initd. You will need to add
1560    an entry in F</etc/inittab> to start monit with the same local
1561    group heartbeat is a member of.
1562    
1563     #/etc/inittab
1564     mo:2345:respawn:/usr/local/bin/monit -i -d 10 -c /etc/monitrc -g local
1565    
1566    B<heartbeat  F</etc/ha.d/haresources>>
1567    
1568    When heartbeat starts, it looks up the node entry and starts the
1569    script F</etc/init.d/monit-node1> or
1570    F</etc/init.d/monit-node2>. The script calls monit to start the
1571    specific group for that node.
1572    
1573     # /etc/ha.d/haresources
1574     node1 IPaddr::172.16.100.1  monit-node1
1575     node2 IPaddr::172.16.100.2  monit-node2
1576    
1577    
1578    B<F</etc/init.d/monit-node1>>
1579    
1580     #!/bin/bash
1581     #
1582     # sample script for starting/stopping all services on node1
1583     #
1584     prog="/usr/local/bin/monit -g node1"
1585     start()
1586     {
1587           echo -n $"Starting $prog:"
1588           $prog start
1589           echo
1590     }
1591    
1592     stop()
1593     {
1594           echo -n $"Stopping $prog:"
1595           $prog stop
1596           echo
1597     }
1598    
1599     case "$1" in
1600           start)
1601                start;;
1602           stop)
1603                stop;;
1604           *)
1605                echo $"Usage: $0 {start|stop}"
1606                RETVAL=1
1607     esac
1608     exit $RETVAL
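
A matching F</etc/init.d/monit-node2> script would differ only in
using I<-g node2>. A quick manual check of a node might look like
this (hypothetical session):

   /etc/init.d/monit-node1 start
   monit -g node1 status
   /etc/init.d/monit-node1 stop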
1609    
1610    
1611  =head1 FILES  =head1 FILES
1612    
1613  F<~/.monitrc>    F<~/.monitrc>  
