Bug 6845, 6821
Avoid using /sys/class/net/ethX/device/irq (or /sys/bus/pci/.../irq),
because these don't account for MSI or multiqueue interrupts. This also
resolves issues with VMware ESX.
(cherry picked from commit 082b1f52b5d18a7d6526c6e92290a862e63ddace)
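
A sketch of the alternative, assuming the per-vector names that MSI-X
multiqueue drivers register show up in the action column of
/proc/interrupts; device_irqs() and the name patterns are illustrative,
not the script's actual code.

    import re
    import sys

    def device_irqs(ifname):
        """Collect every IRQ whose /proc/interrupts action mentions the
        interface, so MSI-X/multiqueue vectors (eth0-rx-0, eth0-TxRx-1,
        ...) are found where /sys/.../device/irq reports only one IRQ."""
        irqs = []
        with open("/proc/interrupts") as f:
            for line in f:
                fields = line.split()
                # Skip the CPU header and non-numeric rows (NMI, LOC, ...).
                if not fields or not fields[0].rstrip(":").isdigit():
                    continue
                # The device (action) name is the last column.
                if re.search(r"\b%s\b" % re.escape(ifname), fields[-1]):
                    irqs.append(int(fields[0].rstrip(":")))
        return irqs

    if __name__ == "__main__":
        print(device_irqs(sys.argv[1]))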

Bug 6845
Warn about (and ignore) any attempt to assign an IRQ directly on a
multiqueue NIC.
(cherry picked from commit 6938b8bce001cca2d98d6b277d134c9e8e405271)
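
A sketch of that guard, reusing the device_irqs() helper from the first
sketch; the function name and message text are assumptions.

    import sys

    def assign_irq(ifname, mask):
        irqs = device_irqs(ifname)
        if not irqs:
            return  # no IRQ to assign (device disabled or virtual)
        if len(irqs) > 1:
            # A single explicit IRQ assignment is ambiguous on a
            # multiqueue NIC, so warn and leave the vectors alone.
            sys.stderr.write("%s is multiqueue; ignoring IRQ assignment\n"
                             % ifname)
            return
        with open("/proc/irq/%d/smp_affinity" % irqs[0], "w") as f:
            f.write(mask)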

Bug 6784
There is no point in trying to force affinity if the device is offline.
(cherry picked from commit b541f5ffa7bf1c6951e01ae4814e0cd38adc42d5)
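
A sketch of the offline check, assuming the sysfs operstate file is the
signal; devices reporting "down" are skipped before any affinity is
forced.

    def device_is_online(ifname):
        try:
            with open("/sys/class/net/%s/operstate" % ifname) as f:
                return f.read().strip() != "down"
        except IOError:
            return False  # no sysfs entry at all: treat as offline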

Adapt to the IRQ naming convention used for the vmxnet3 driver in the
2.6.37 kernel.
(cherry picked from commit 018c1ac6286ad40d7fff612573a7efffafe0d480)
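
A sketch of matching more than one vector-name shape at once; the exact
patterns (eth0-rxtx-0 versus eth0:rxtx-0 style names) are assumptions,
not taken from the vmxnet3 source.

    import re

    def irq_name_matches(ifname, action):
        # Accept "eth0", "eth0-rxtx-0", "eth0:rx-1", "eth0-event", ...
        pattern = r"^%s([:-](rxtx|rx|tx|event)?-?\d*)?$" % re.escape(ifname)
        return re.match(pattern, action) is not None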

Bug 6784
A disabled device has no IRQ, so don't try to change it.
(cherry picked from commit 78d24daefeab6e91f282044abb8930678434ea8c)
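
A sketch of the corresponding guard, again assuming the device_irqs()
helper from the first sketch: a disabled device exposes no vectors in
/proc/interrupts, so there is nothing to retarget.

    def set_affinity(ifname, mask):
        irqs = device_irqs(ifname)
        if not irqs:
            return  # disabled device: no IRQ to change
        for irq in irqs:
            with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
                f.write(mask)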

The initial CPU selection function needs to take hyperthreading
into account.
(cherry picked from commit c1eb2494559fb0f6ee2beecaedb2a415ff096056)
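
A sketch of hyperthreading-aware selection, assuming the sysfs topology
files; it keeps one logical CPU per physical core so both siblings of a
hyperthreaded core are not chosen.

    import glob

    def one_cpu_per_core():
        paths = sorted(
            glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/"
                      "thread_siblings_list"),
            key=lambda p: int(p.split("/")[-3][3:]))  # numeric cpuN order
        seen, chosen = set(), []
        for path in paths:
            cpu = int(path.split("/")[-3][3:])
            with open(path) as f:
                siblings = f.read().strip()  # e.g. "0,4" for both threads
            if siblings not in seen:
                seen.add(siblings)
                chosen.append(cpu)  # lowest-numbered thread of this core
        return chosen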

The problem was due to incorrect initialization of the $q and $cpu
variables: their initializations were reversed.

Handle virtual devices that have no IRQ, and older processors that have
no concept of multi-core.
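
A sketch of the topology fallback, assuming that the sysfs core_id file
may simply be absent on processors that predate multi-core; each CPU is
then treated as its own core. (Virtual devices with no IRQ are already
covered by the empty-list guards above.)

    import os

    def core_id(cpu):
        path = "/sys/devices/system/cpu/cpu%d/topology/core_id" % cpu
        if not os.path.exists(path):
            return cpu  # no topology info: one core per CPU
        with open(path) as f:
            return int(f.read())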

Replace the old script with a new, cleaner script that handles both
IRQ affinity and Receive Packet Steering (RPS). Instead of two scripts
(one for mask and one for auto), do it all with one script.
RPS is supported in two ways, as sketched below:
  - If 'auto' is used, both threads on a hyperthreaded system are used
    for receive processing.
  - If an explicit mask is given, two masks can be used to set both the
    IRQ CPUs and the RPS CPUs.
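
A sketch of applying the two masks, assuming the standard kernel knobs
(/proc/irq/N/smp_affinity for IRQ affinity, and the per-queue
/sys/class/net/<dev>/queues/rx-N/rps_cpus files for RPS) and the
device_irqs() helper from the first sketch; masks are hex CPU-mask
strings such as "3" for CPUs 0-1.

    import glob

    def apply_masks(ifname, irq_mask, rps_mask):
        # Pin each of the device's interrupt vectors to the IRQ mask.
        for irq in device_irqs(ifname):
            with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
                f.write(irq_mask)
        # Steer receive-packet processing onto the RPS mask per rx queue.
        for rx in glob.glob("/sys/class/net/%s/queues/rx-*/rps_cpus"
                            % ifname):
            with open(rx, "w") as f:
                f.write(rps_mask)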