Haven't run into something like this in a long time :)
Let's take a look.
zabbix-proxy# netstat -m
50/730/780 mbufs in use (current/cache/total)
32/230/262/524288 mbuf clusters in use (current/cache/total/max)
32/224 mbuf+clusters out of packet secondary zone in use (current/cache)
0/128/128/8576 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/4288 9k jumbo clusters in use (current/cache/total/max)
0/0/0/2144 16k jumbo clusters in use (current/cache/total/max)
76K/1154K/1231K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
zabbix-proxy#
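No denied requests for mbufs, clusters, or jumbo clusters, so the allocator itself is not starved. The "denied" counters are the ones worth watching over time; a trivial watch loop (the interval is arbitrary):
# while true; do netstat -m | grep denied; sleep 5; done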
zabbix-proxy# netstat -id
Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll Drop
de0 1500 <Link#1> 00:15:5d:15:02:8c 26863978 0 0 26179780 0 0 906
de0 1500 91.001.002.0/ zabbix-proxy 24461808 - - 23811747 - - -
de0 1500 172.19.1.48/2 172.19.1.53 103 - - 2356689 - - -
ipfw0 65536 <Link#2> 0 0 0 0 0 0 0
lo0 16384 <Link#3> 140 0 0 140 0 0 0
lo0 16384 fe80:3::1 fe80:3::1 0 - - 0 - - -
lo0 16384 localhost ::1 0 - - 0 - - -
lo0 16384 your-net localhost 140 - - 140 - - -
zabbix-proxy#
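With -d the last column shows output-queue drops, so the only loss visible here is the 906 drops on de0's send side; input errors and input drops are zero. The Network and Address columns are truncated at the default width; if your netstat supports the -W flag, it avoids the truncation:
# netstat -idW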
Let's change the situation in sysctl.conf:
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
kern.ipc.maxsockbuf=1048576
kern.ipc.nmbclusters=524288
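One thing worth double-checking in this set: kern.ipc.maxsockbuf is the hard ceiling for any socket buffer, so with maxsockbuf=1048576 the 16 MB sendbuf_max/recvbuf_max can never actually be reached; to let buffer auto-tuning grow that far, maxsockbuf would have to be raised as well. The new values can be applied without a reboot, for example (on reasonably recent FreeBSD; older systems can run /etc/rc.d/sysctl start instead):
# service sysctl restart
or one value by hand, e.g. lifting the ceiling to match the buffer maximums:
# sysctl kern.ipc.maxsockbuf=16777216
Note that kern.ipc.nmbclusters may be a boot-time tunable on older kernels; if sysctl refuses to change it at runtime, put it in /boot/loader.conf instead.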
zabbix-proxy# vmstat -z | grep mbuf
ITEM SIZE LIMIT USED FREE REQUESTS FAILURES
mbuf_packet: 256, 0, 32, 224, 27256724, 0
mbuf: 256, 0, 14, 510, 70586157, 0
mbuf_cluster: 2048, 524288, 256, 6, 256, 0
mbuf_jumbo_page: 4096, 8576, 0, 128, 1076992, 0
mbuf_jumbo_9k: 9216, 4288, 0, 0, 0, 0
mbuf_jumbo_16k: 16384, 2144, 0, 0, 0, 0
mbuf_ext_refcnt: 4, 0, 0, 0, 0, 0
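Same picture at the UMA zone level: FAILURES is zero everywhere, so no mbuf allocation has actually failed yet. A quick filter that prints only zones with non-zero failures (assuming the comma-separated vmstat -z format shown above, where the sixth comma-separated field is FAILURES):
# vmstat -z | awk -F, '$6+0 > 0'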
and that is after already having added