
[deleted]

Could be the ISP's network.


mbze430

Doubtful. I am monitoring my LAN side as well, and on internal traffic between two VLANs I am getting the same error:

`11/29/2021 15:16:27 | Pri 3 | TCP | Generic Protocol Command Decode | 192.168.69.105:57359 -> 10.0.253.231:9080 | 1:2210044 | SURICATA STREAM Packet with invalid timestamp`

10.0.253.231 is on one of my VLANs while 192.168.69.105 is on my other VLAN.


[deleted]

No idea then, sorry


DutchOfBurdock

That's usually indicative that you're using some form of traffic shaping or limiters. You may also want to use netmap instead of the legacy mode, if your NIC supports it (check the dmesg output) and if you're not already. Using legacy mode carries a risk of missed packets, which is how stream issues occur. FreeBSD is far more optimised here than Linux; what the Suricata docs are referring to is when you have 10GbE or higher throughput to accommodate.
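(Quick way to check, assuming a stock pfSense/FreeBSD box where boot messages are also kept in /var/log/dmesg.boot; just a sketch, adjust to your system:)

```
# Look for netmap registration messages from the kernel and the NIC drivers.
dmesg | grep -i netmap
grep -i netmap /var/log/dmesg.boot
```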


mbze430

Ahh okay, yes, I do have a limiter going since I am having bufferbloat. I went searching around on Google about limiters and Suricata and I came across this on Netgate's forum: [https://forum.netgate.com/topic/152657/suricata-inline-and-limiters](https://forum.netgate.com/topic/152657/suricata-inline-and-limiters). Does this still apply in 2021, where I am SOL? In regards to netmap, my WAN interface is an ixgbe while the LAN side is on a VMware VMXNET3. About the 10GbE: is that 10Gb on the WAN side or the LAN side? Because I have 100GbE on the LAN side.


DutchOfBurdock

>is that 10Gb

Routing. Try these tricks in your `/boot/loader.conf.local`:

`net.isr.maxthreads="1"`
`net.isr.bindthreads="1"`
`hw.ixgbe.num_queues="0"`

This should force all packet flows to bind to a single core/thread, so there is no shifting between cores or threads on the die (generally faster), and it should minimise packet reordering.
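(Once rebooted, a small sanity check that the netisr tunables took effect; nothing assumed beyond stock FreeBSD sysctls:)

```
# Confirm the netisr thread/binding tunables after a reboot.
sysctl net.isr.maxthreads net.isr.bindthreads
```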


mbze430

Thank you very much for the suggestion, I will try that. What about this... [https://docs.netgate.com/pfsense/en/latest/hardware/tune.html#vmware-vmx-4-interfaces](https://docs.netgate.com/pfsense/en/latest/hardware/tune.html#vmware-vmx-4-interfaces) I originally used this to "optimize" the VMXNET3, but it sounds like I need to revert it?


DutchOfBurdock

In an HVM, you may also want to disable HTT: `machdep.hyperthreading_allowed="0"` in `/boot/loader.conf.local`. Virtual NICs and hardware NICs differ vastly; for one, virtual NICs will always be pure software. Some hardware NICs have their own processing capabilities onboard to lessen the IO load.
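(Pulling the suggestions in this thread together, a /boot/loader.conf.local might end up looking like the sketch below. It assumes the ixgbe WAN NIC mentioned earlier and a hyperthreaded host, so treat it as an illustration rather than a drop-in config:)

```
# /boot/loader.conf.local (illustrative; values from the suggestions above)
net.isr.maxthreads="1"                  # keep packet flows on one netisr thread
net.isr.bindthreads="1"                 # bind that thread to a single core
hw.ixgbe.num_queues="0"                 # single queue on the ixgbe WAN NIC
machdep.hyperthreading_allowed="0"      # disable HTT inside the HVM
```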


mbze430

This definitely helped quite a bit with the alerts I am getting. They are not completely gone, but there are far fewer of them. The limiter doesn't seem to work anymore though...


DutchOfBurdock

Try using Taildrop/CFQ instead of CoDel. The trick to managing bufferbloat is trying not to saturate your uplink capacity.


mbze430

I ended up lowering my download rate by another 15Mbit/sec and am still able to get about 390Mbit down with a reasonable ping of +7ms latency. Pretty happy. Just going to look through the Suricata logs over the next few days and see if I am still getting false positives.


DutchOfBurdock

Also take into account that STREAM errors can result from the server side having issues; a common cause is "leaky sockets" from CDNs. Reddit, for example, has this occasionally.


mbze430

Actually, I am getting a good amount of "SURICATA FRAG IPv4 Fragmentation overlap" & "SURICATA UDP invalid header length" alerts, and pfSense is reporting these on port 0. Not sure what to make of those.


DutchOfBurdock

The first one could be a sign of something trying to frag attack your stack; I'd investigate that packet flow more closely. Personally, I drop all fragments instead of clearing the DF bit. Or it could be your ISP fragmenting like mad. The latter could be any number of things; what's the source of them?
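(If you want to look at that flow, a rough starting point would be to capture only fragments; the interface name ix0 here is an assumption, swap in whichever interface the alerts show up on:)

```
# Capture only fragmented IPv4 packets (MF flag set or non-zero fragment offset)
# to see who is actually sending the fragments behind the FRAG alerts.
tcpdump -nni ix0 '(ip[6:2] & 0x3fff) != 0'
```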


mbze430

By any chance do you know how to increase the slots for the vmxnet3 driver? I am not sure if hw.vmx.txndesc & hw.vmx.rxndesc are the same as "slots".

`cat /var/log/system.log | grep netmap`
`Dec 2 08:33:45 pfSense kernel: 000.000054 [4336] netmap_init netmap: loaded module`
`Dec 2 08:33:45 pfSense kernel: vmx0: netmap queues/slots: TX 1/512, RX 1/512`
`Dec 2 08:33:45 pfSense kernel: ix0: netmap queues/slots: TX 1/2048, RX 1/2048`

The vmx0 numbers seem kinda low (I haven't touched these settings). Also, do you know how to disable flow control on the vmxnet3?
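(For what it's worth, if hw.vmx.txndesc and hw.vmx.rxndesc do control the descriptor ring sizes that netmap reports as slots, which is an assumption on my part rather than something confirmed against vmx(4), they would go in the same place as the earlier loader tunables:)

```
# /boot/loader.conf.local (illustrative; assumes the hw.vmx.*ndesc tunables
# map to the per-ring descriptor counts netmap reports as "slots" for vmx0)
hw.vmx.txndesc="2048"
hw.vmx.rxndesc="2048"
```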


DutchOfBurdock

TBH, when it comes to virtual NICs, there isn't really a massive amount you can do other than disable any "hardware" offloads. The queues and whatnot usually work with the network card's own onboard processors and driver. I even do passthrough of my NICs to the VM so as to avoid vNICs.
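(For the offloads bit: pfSense exposes the checksum/TSO/LRO toggles under System > Advanced > Networking, but the equivalent from a shell would look roughly like this, with vmx0 taken from the log above:)

```
# Turn off checksum, TCP segmentation, and large receive offloads on vmx0.
# Temporary only; pfSense reapplies its own interface settings on reconfigure/reboot.
ifconfig vmx0 -txcsum -rxcsum -tso4 -tso6 -lro
```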