Commit messages
- For routing and other applications it is helpful to provide a
  mechanism to reserve a set of CPUs and keep interface IRQs off
  of them. Uses the environment variable VYATTA_IRQAFFINITY_BANNED_CPUS
  as a mechanism similar to irqbalance(8).
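As a minimal sketch (not the Vyatta script itself, and with illustrative helper names), the banned-CPU mechanism amounts to parsing a hexadecimal CPU bitmask from the environment, the same convention irqbalance(8) uses for its IRQBALANCE_BANNED_CPUS variable:

```python
import os

def banned_cpus(env=os.environ, var="VYATTA_IRQAFFINITY_BANNED_CPUS"):
    """Return the set of CPU numbers banned from IRQ assignment.

    The variable is read as a hexadecimal bitmask (CPU 0 = bit 0),
    mirroring irqbalance's IRQBALANCE_BANNED_CPUS convention.
    """
    mask = int(env.get(var, "0"), 16)
    return {cpu for cpu in range(mask.bit_length()) if mask & (1 << cpu)}

def eligible_cpus(all_cpus, env=os.environ):
    """CPUs that may still receive interface IRQs."""
    return sorted(set(all_cpus) - banned_cpus(env))
```

For example, with the variable set to "c" (binary 1100), CPUs 2 and 3 are banned, so on a four-CPU box only CPUs 0 and 1 remain eligible.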
- Multiqueue setup was broken on ixgbe because of an s/assign/assing/ typo.
- Bug 7062
  The IRQs on the NetXen NIC are named:
    eth0[0] eth0[1] eth0[2] ...
  This confuses the auto IRQ affinity script.
- Bug 7032 (reprise)
  Since there are various forms of multiqueue naming, it is better
  to just go with the simplest pattern, which is to take all the IRQs
  of the form ethX-... and sort them.
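The "take all IRQs of the form ethX-... and sort them" pattern can be sketched by scanning /proc/interrupts; this Python version is illustrative (the helper name, exact regex, and sample data are not the script's actual code):

```python
import re

# Example /proc/interrupts content (truncated, illustrative):
SAMPLE = """\
           CPU0       CPU1
 42:        0        0  PCI-MSI-edge  eth0-TxRx-1
 41:        0        0  PCI-MSI-edge  eth0-TxRx-0
 50:        0        0  PCI-MSI-edge  eth1-rx-0
"""

def device_irqs(interrupts_text, ifname):
    """Return (irq, name) pairs for every IRQ named '<ifname>-<anything>',
    sorted by name, regardless of which multiqueue suffix the driver
    uses (-TxRx-0, -rx-0, -tx-0, ...)."""
    pattern = re.compile(r"^\s*(\d+):.*\s(" + re.escape(ifname) + r"-\S+)\s*$")
    found = []
    for line in interrupts_text.splitlines():
        m = pattern.match(line)
        if m:
            found.append((int(m.group(1)), m.group(2)))
    return sorted(found, key=lambda pair: pair[1])
```

Here `device_irqs(SAMPLE, "eth0")` returns `[(41, "eth0-TxRx-0"), (42, "eth0-TxRx-1")]`: both eth0 queue IRQs, in queue order, with eth1 ignored.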
- Bug 7032
  Fix matching of IRQs named 'eth0-TxRx-0', and change the code to
  handle any form of multiqueue IRQ naming of the form eth0-xxx-0.
- The mislabeled commit ddce08161907797fe914ba609b362d812e23fc8a
  ("Fix wrong name in get_irq_affinity") was untested code to handle
  the Broadcom device IRQ naming convention. The part that built the
  regex was incorrectly expanding a string containing regex
  metacharacters.
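The bug described above, interpolating a literal device name into a regex without escaping its metacharacters, can be illustrated in Python (a hedged sketch, not the script's own language) using re.escape, the analogue of Perl's quotemeta:

```python
import re

def irq_name_regex(name):
    """Build a pattern matching NetXen-style IRQ names like 'eth0[0]'.

    re.escape() is essential: without it, the brackets in a name such
    as 'eth0[0]' would be parsed as a regex character class instead of
    literal characters.
    """
    return re.compile(r"^" + re.escape(name) + r"\[\d+\]$")

# The bug in miniature: the unescaped pattern '^eth0[0]$' treats [0]
# as a character class, so it matches 'eth00' but NOT 'eth0[0]'.
```

With escaping, `irq_name_regex("eth0")` matches "eth0[3]" and rejects "eth1[0]" as expected.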
- Bug 6845, 6821
  Need to avoid using /sys/class/net/ethX/device/irq (or /sys/bus/pci/.../irq)
  because these don't handle MSI or multiqueue. This also resolves issues
  with VMware/ESX.
- Bug 6845
  Warn about (and ignore) attempts to assign an IRQ directly on a
  multiqueue NIC.
- Bug 6784
  No point in trying to force affinity if the device is offline.
- Adapt to the IRQ naming convention in the 2.6.37 kernel for the vmxnet3 driver.
- Bug 6784
  A disabled device has no IRQ, so don't change it.
- The initial CPU selection function needs to take hyperthreading
  into account.
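Hyperthreading-aware selection can be sketched as picking one thread per physical core before doubling up on siblings. On Linux the sibling sets would come from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list; the function name and the mapping-based interface here are illustrative:

```python
def first_thread_per_core(siblings):
    """Pick one CPU per physical core.

    siblings maps each cpu -> frozenset of its hyperthread siblings
    (including itself). Returns the lowest-numbered thread of each
    core, so initial IRQ placement spreads across real cores instead
    of stacking onto two threads of the same core.
    """
    seen_cores = set()
    chosen = []
    for cpu in sorted(siblings):
        core = siblings[cpu]
        if core not in seen_cores:
            seen_cores.add(core)
            chosen.append(cpu)
    return chosen
```

For a two-core, four-thread topology where CPUs 0/2 and 1/3 are sibling pairs, this picks CPUs 0 and 1: one thread per physical core.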
- (cherry picked from commit a943568e64bca73bb2951e968d0cc752d72874ab)
- The problem was due to incorrect initialization of the $q and $cpu
  variables: their initializations were swapped.
- Need to handle virtual devices with no IRQ, and older processors
  without the concept of multi-core.
- Replace the old script with a new, cleaner script that handles both
  IRQ affinity and Receive Packet Steering. Instead of two scripts
  (one for the mask and one for auto), do it all with one script.
  Receive Packet Steering is supported in two ways:
  if 'auto' is used, then both threads on an HT system will be
  used for receive processing; if an explicit mask is given, then
  two masks can be used to set both the IRQ CPUs and the RPS CPUs.
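A hedged sketch of the mask computation in 'auto' mode: the IRQ is pinned to one CPU (a hex mask written to /proc/irq/<n>/smp_affinity), while the RPS mask (written to /sys/class/net/<dev>/queues/rx-<q>/rps_cpus) includes that CPU's hyperthread siblings, so both threads of the core share receive processing. The function names and the siblings mapping are illustrative:

```python
def cpu_mask(cpus):
    """Hex bitmask string for a set of CPUs (CPU 0 = bit 0), in the
    format accepted by /proc/irq/<n>/smp_affinity and
    /sys/class/net/<dev>/queues/rx-<q>/rps_cpus."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

def auto_masks(cpu, siblings):
    """'auto' mode: the IRQ lands on one CPU, while RPS uses that CPU
    plus its hyperthread siblings. siblings maps cpu -> iterable of
    sibling CPUs (including itself)."""
    return cpu_mask([cpu]), cpu_mask(siblings[cpu])
```

For example, pinning a queue to CPU 1 whose sibling is CPU 3 yields an IRQ mask of "2" and an RPS mask of "a" (CPUs 1 and 3).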