mtr


[root@ns3 ~]# mtr -r -c50 11.112.234.236
HOST: ns3.host96.ru               Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 192.168.23.1                  0.0%    50    0.3   1.1   0.3   7.1   1.7
  2. 10.220.1.20                   0.0%    50    0.5   1.4   0.4  18.7   2.9
  3. 85-2.ru                       0.0%    50    3.4   2.1   0.5  23.8   3.8
  4. 192.168.99.13                 0.0%    50    0.7   3.2   0.5  17.7   5.0
  5. kt12-1-gw.spb.------.ru       0.0%    50    0.8   6.8   0.6 190.2  28.1
  6. m9-3-gw.msk.******.ru         0.0%    50   10.3  14.3  10.2 115.7  15.0
  7. gi-1-14.r2-m9.*********.net   0.0%    50   13.6  12.5  10.1  29.8   4.6
  8. u**********x.net             24.0%    50   41.9  46.5  40.8 198.0  25.4
  9. 82.*.*.181                    0.0%    50   41.2  43.0  40.6  72.9   4.9
 10. 172.18.8.10                   4.0%    50   42.1  48.1  41.0 150.3  16.8
 11. 11.112.234.236                2.0%    50   42.2  43.4  41.3  58.7   3.7
[root@ns3 ~]#

ping: sendto: No buffer space available

Haven't seen that one in a long time :)

Let's take a look.
zabbix-proxy# netstat -m
50/730/780 mbufs in use (current/cache/total)
32/230/262/524288 mbuf clusters in use (current/cache/total/max)
32/224 mbuf+clusters out of packet secondary zone in use (current/cache)
0/128/128/8576 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/4288 9k jumbo clusters in use (current/cache/total/max)
0/0/0/2144 16k jumbo clusters in use (current/cache/total/max)
76K/1154K/1231K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
zabbix-proxy#
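The counters above come from the kernel mbuf allocator; the "requests ... denied" lines are the ones that turn non-zero once the system actually runs out of buffers. A couple of related checks, as a sketch (standard FreeBSD sysctl names):

zabbix-proxy# sysctl kern.ipc.nmbclusters kern.ipc.maxsockbuf
zabbix-proxy# netstat -m | grep denied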



zabbix-proxy# netstat -id
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll Drop
de0    1500 <Link#1>      00:15:5d:15:02:8c 26863978     0     0 26179780     0     0  906
de0    1500 91.001.002.0/ zabbix-proxy      24461808     -     - 23811747     -     -    -
de0    1500 172.19.1.48/2 172.19.1.53            103     -     -  2356689     -     -    -
ipfw0 65536 <Link#2>                               0     0     0        0     0     0    0
lo0   16384 <Link#3>                             140     0     0      140     0     0    0
lo0   16384 fe80:3::1     fe80:3::1                0     -     -        0     -     -    -
lo0   16384 localhost     ::1                      0     -     -        0     -     -    -
lo0   16384 your-net      localhost              140     -     -      140     -     -    -
zabbix-proxy#
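If the Drop column on de0 keeps climbing, the interface is still dropping packets; it can be watched live with netstat's interval mode (a sketch, interface name taken from the table above):

zabbix-proxy# netstat -w 1 -I de0 -d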


We change the settings in sysctl.conf:

net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072

net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288

kern.ipc.maxsockbuf=1048576
kern.ipc.nmbclusters=524288
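/etc/sysctl.conf is only read at boot, so the same values can also be pushed into the running system with sysctl(8); a sketch (note that on older FreeBSD releases kern.ipc.nmbclusters is a loader tunable and would have to go into /boot/loader.conf instead):

zabbix-proxy# sysctl kern.ipc.nmbclusters=524288
zabbix-proxy# sysctl kern.ipc.maxsockbuf=1048576
zabbix-proxy# sysctl net.inet.tcp.sendbuf_max=16777216
zabbix-proxy# sysctl net.inet.tcp.recvbuf_max=16777216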


zabbix-proxy# vmstat -z | grep mbuf
ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES
mbuf_packet:              256,        0,       32,      224, 27256724,        0
mbuf:                     256,        0,       14,      510, 70586157,        0
mbuf_cluster:            2048,   524288,      256,        6,      256,        0
mbuf_jumbo_page:         4096,     8576,        0,      128,  1076992,        0
mbuf_jumbo_9k:           9216,     4288,        0,        0,        0,        0
mbuf_jumbo_16k:         16384,     2144,        0,        0,        0,        0
mbuf_ext_refcnt:            4,        0,        0,        0,        0,        0
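After the change the FAILURES column is the one to keep an eye on: it stays at zero as long as the allocator never has to refuse a request. A trivial periodic check (hypothetical one-liner):

zabbix-proxy# while :; do date; vmstat -z | grep mbuf_cluster; sleep 60; done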


By this point we had already added:

mrtg-ping-probe VS SmokePing

mrtg-ping-probe
/usr/ports/net-mgmt/mrtg-ping-probe/
mrtg-ping-probe is a ping probe for MRTG 2.x.  It is used to monitor
the round trip time and packet loss to networked devices.  MRTG uses
its output to generate graphs visualizing minimum and maximum round
trip times or packet loss.
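For reference, mrtg-ping-probe is wired into MRTG as an external target in mrtg.cfg; a rough sketch (hypothetical target name, binary path assumed to be the port's default under /usr/local):

Target[ping-host]: `/usr/local/bin/mrtg-ping-probe 11.112.234.236`
MaxBytes[ping-host]: 2000
Title[ping-host]: RTT / packet loss to 11.112.234.236
Options[ping-host]: growright, gauge, nopercent
YLegend[ping-host]: ms / %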


SmokePing
/usr/ports/net-mgmt/smokeping
SmokePing is a latency logging and graphing system. It consists of a
daemon process which organizes the latency measurements and a CGI
which presents the graphs.
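On the SmokePing side, hosts are described in a hierarchy in its Targets config file; a minimal entry might look like this (hypothetical section names, FPing probe assumed to be defined in the Probes section):

+ network
menu = Network
title = Network latency

++ zabbix-proxy
menu = zabbix-proxy
title = zabbix-proxy
host = 11.112.234.236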