On Mon, Apr 30, 2018 at 12:12:24PM -0400, Laine Stump wrote:
> First - this is a *great* idea, and we should do something similar
> for the network driver (keep track of all the connections to each
> network so they can be re-connected when a network is stopped/started).
Yes, this nwfilter tracking is a prerequisite for similar work that
we need to do in the network driver, e.g. when an unprivileged
libvirtd asks the privileged network driver for a TAP device, the
network driver will also need to talk to the nwfilter driver to set
up filtering.

We'll need something similar in the node dev driver too, before we
can do the network driver, because of the horrible
<interface type='hostdev'> stuff we need to support.
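As a rough sketch of the call chain I have in mind (every function and
type name below is invented purely for illustration; none of this is
existing libvirt code):

  /* Hypothetical sketch: networkPortCreate, networkCreateTap,
   * networkRemoveTap and nwfilterBindPort are invented names used to
   * illustrate the delegation chain, not existing libvirt functions. */

  typedef struct {
      const char *netname;      /* network the port attaches to */
      const char *ifname;       /* TAP device name to create */
      const char *filtername;   /* nwfilter to apply, may be NULL */
      unsigned char vmuuid[16]; /* guest the port belongs to */
  } virNetworkPortRequest;

  int networkCreateTap(const char *netname, const char *ifname);
  int networkRemoveTap(const char *ifname);
  int nwfilterBindPort(const char *ifname, const char *filtername,
                       const unsigned char *vmuuid);

  /* Handler in the privileged network driver, acting on behalf of an
   * unprivileged libvirtd that cannot create TAP devices itself. */
  int
  networkPortCreate(const virNetworkPortRequest *req)
  {
      /* Create the TAP device with the privileges the caller lacks. */
      if (networkCreateTap(req->netname, req->ifname) < 0)
          return -1;

      /* Delegate filtering to the nwfilter driver, so the unprivileged
       * caller never touches iptables/ebtables directly. */
      if (req->filtername &&
          nwfilterBindPort(req->ifname, req->filtername, req->vmuuid) < 0) {
          networkRemoveTap(req->ifname);
          return -1;
      }
      return 0;
  }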
> Second - we need to think about the situation where the nwfilter
> process is stopped while the hypervisor driver shuts down a guest.
> If nwfilter keeps track of the current set of active filters,
> serializes them to disk, and then rereads/reloads that list of filters
> when it's restarted, then this sequence is possible:
>
> 1) nwfilter daemon is stopped
> 2) qemu destroys a guest
> 3) nwfilter daemon is restarted
>
> Since we wouldn't want qemu to just hang forever during guest shutdown
> if one of the subordinate drivers was unavailable, we could end up with
> the filter rules for the defunct guest (which were saved off to disk by
> nwfilter) being reloaded in step 3, and there would be no way to clear
> them out short of rebooting the host.
>
> I can think of two ways to solve this:
>
> 1) qemu (or whichever hypervisor) keeps a queue of pending requests to
> nwfilter, and re-sends them when the nwfilter daemon once again becomes
> available and is reconnected. (But then what happens if qemu is
> restarted while there are pending requests to nwfilter? We would need to
> keep all the pending requests serialized on disk too? :-/)
>
> 2) each time qemu reconnects to the nwfilter driver, it issues a "reset"
> command, which through some form of magic hand waving both reloads the
> rules for all guests that are still active, and deletes the rules for
> those that aren't.
>
> (2) seems like it might be simpler and more robust - perhaps all that
> would be needed would be for nwfilter to keep the name/uuid of each
> guest that has active rules (which you're already doing in this series,
> although we also need something to identify the hypervisor driver that
> requested the rules - if all connections are local, then just a driver
> name and pid would probably be sufficient), and to have a
> virNWFilterReset() (or some other better name) API that accepts a list
> of active uuids. Each time qemu connects to nwfilter, it could send
> this list, and nwfilter would delete all rules associated with that
> hypervisor and a name/uuid not in the list; as a bonus it could also
> reload the rules for guests that *are* still active (similar to the way
> the network driver reloads the iptables rules for all the networks when
> it is restarted).
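To make option (2) concrete, here is a minimal sketch of what such a
reset call could look like. virNWFilterReset() is just the placeholder
name suggested above; the owner struct and parameter shapes are
assumptions for illustration only:

  /* Sketch of the proposed reset API; nothing here exists in libvirt,
   * the shapes are assumed for illustration. */

  #include <stddef.h>      /* size_t */
  #include <sys/types.h>   /* pid_t */

  /* Identifies which hypervisor driver owns a set of filter rules;
   * for local connections a driver name plus pid should suffice. */
  typedef struct {
      const char *drivername;  /* e.g. "QEMU" */
      pid_t pid;               /* pid of the hypervisor driver */
  } virNWFilterOwner;

  /*
   * Called by a hypervisor driver each time it (re)connects to the
   * nwfilter daemon, passing the uuids of all guests still running.
   * nwfilter would then:
   *   - delete rules owned by this driver whose uuid is not listed
   *     (the guest was destroyed while nwfilter was down)
   *   - reload the rules for the guests that are still active
   */
  int virNWFilterReset(const virNWFilterOwner *owner,
                       const char **activeUUIDs,
                       size_t nactiveUUIDs);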
The key thing to bear in mind here is that whatever solution we take
needs to be applicable to network devices and host devices too. So I
think we want to avoid overloading semantics that are specific to the
task of firewalling. IOW, I would not want to rebuild all filters just
because the hypervisor driver restarted.
IMHO it is the nwfilter driver's job to expose a mechanism - create
filter binding, list filter bindings, and delete filter bindings. It is
then up to the caller to decide when these should be called, and to
ensure that they are always called eventually, even if a service is
temporarily offline. This all says option (1) to me, even though it
requires more explicit state tracking in the virt drivers, compared to
our ability to be lazy right now.
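As a sketch of that mechanism/policy split (again, every name below is
invented, not existing libvirt API): the nwfilter driver exposes only
the three binding primitives, and the virt driver keeps its own
persistent queue of operations to replay after an outage, which is
option (1) in practice:

  /* Sketch only: a mechanism-only binding API plus the replay loop a
   * virt driver would run on reconnect; all names are invented. */

  #include <stddef.h>

  typedef struct virNWFilterBinding virNWFilterBinding;

  /* The three primitives the nwfilter driver exposes; the policy of
   * when to call them lives entirely in the caller. */
  virNWFilterBinding *nwfilterBindingCreate(const char *ifname,
                                            const char *filtername,
                                            const char *vmuuid);
  int nwfilterBindingDelete(const char *ifname);
  int nwfilterBindingList(virNWFilterBinding ***bindings,
                          size_t *nbindings);

  /* The virt driver serializes pending operations to disk and replays
   * them once the nwfilter daemon is reachable again. */
  typedef enum { PENDING_CREATE, PENDING_DELETE } pendingOpType;

  typedef struct {
      pendingOpType type;
      char ifname[32];
      char filtername[128];
      char vmuuid[37];         /* printable uuid string */
  } pendingOp;

  int
  qemuReplayPendingNWFilterOps(pendingOp *ops, size_t nops)
  {
      for (size_t i = 0; i < nops; i++) {
          int rc;
          if (ops[i].type == PENDING_CREATE)
              rc = nwfilterBindingCreate(ops[i].ifname, ops[i].filtername,
                                         ops[i].vmuuid) ? 0 : -1;
          else
              rc = nwfilterBindingDelete(ops[i].ifname);
          if (rc < 0)
              return -1;  /* keep remaining ops queued, retry later */
      }
      return 0;           /* queue drained, can be cleared from disk */
  }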
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|