[PATCH] binderfs: rework superblock destruction

Christian Brauner posted 1 patch 3 years, 7 months ago
There is a newer version of this series
From: Al Viro <viro@zeniv.linux.org.uk>

So far we relied on
.put_super = binderfs_put_super()
to destroy the info we stashed in sb->s_fs_info. But the current implementation
of binderfs_fill_super() leaks that memory in the rare case that d_make_root()
fails, because ->put_super() is only called once sb->s_root has been
initialized. Fix this by removing ->put_super() and simply doing all of that
work in binderfs_kill_super().

Reported-by: Dongliang Mu <mudongliangabcd@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Christian Brauner (Microsoft) <brauner@kernel.org>
---
I should note that I didn't have time to test this.
---
 drivers/android/binderfs.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/android/binderfs.c b/drivers/android/binderfs.c
index 588d753a7a19..6d896f75aab6 100644
--- a/drivers/android/binderfs.c
+++ b/drivers/android/binderfs.c
@@ -340,22 +340,10 @@ static int binderfs_show_options(struct seq_file *seq, struct dentry *root)
 	return 0;
 }
 
-static void binderfs_put_super(struct super_block *sb)
-{
-	struct binderfs_info *info = sb->s_fs_info;
-
-	if (info && info->ipc_ns)
-		put_ipc_ns(info->ipc_ns);
-
-	kfree(info);
-	sb->s_fs_info = NULL;
-}
-
 static const struct super_operations binderfs_super_ops = {
 	.evict_inode    = binderfs_evict_inode,
 	.show_options	= binderfs_show_options,
 	.statfs         = simple_statfs,
-	.put_super	= binderfs_put_super,
 };
 
 static inline bool is_binderfs_control_device(const struct dentry *dentry)
@@ -785,11 +773,22 @@ static int binderfs_init_fs_context(struct fs_context *fc)
 	return 0;
 }
 
+static void binderfs_kill_super(struct super_block *sb)
+{
+	struct binderfs_info *info = sb->s_fs_info;
+
+	if (info && info->ipc_ns)
+		put_ipc_ns(info->ipc_ns);
+
+	kfree(info);
+	kill_litter_super(sb);
+}
+
 static struct file_system_type binder_fs_type = {
 	.name			= "binder",
 	.init_fs_context	= binderfs_init_fs_context,
 	.parameters		= binderfs_fs_parameters,
-	.kill_sb		= kill_litter_super,
+	.kill_sb		= binderfs_kill_super,
 	.fs_flags		= FS_USERNS_MOUNT,
 };
 
-- 
2.34.1
Re: [PATCH] binderfs: rework superblock destruction
Posted by Al Viro 3 years, 7 months ago
On Wed, Aug 17, 2022 at 03:03:06PM +0200, Christian Brauner wrote:

> +static void binderfs_kill_super(struct super_block *sb)
> +{
> +	struct binderfs_info *info = sb->s_fs_info;
> +
> +	if (info && info->ipc_ns)
> +		put_ipc_ns(info->ipc_ns);
> +
> +	kfree(info);
> +	kill_litter_super(sb);
> +}

Other way round, please - shut the superblock down, *then*
free the objects it'd been using.  IOW,

	struct binderfs_info *info = sb->s_fs_info;

	kill_litter_super(sb);

	if (info && info->ipc_ns)
		put_ipc_ns(info->ipc_ns);

	kfree(info);
Re: [PATCH] binderfs: rework superblock destruction
Posted by Christian Brauner 3 years, 7 months ago
On Wed, Aug 17, 2022 at 02:59:02PM +0100, Al Viro wrote:
> On Wed, Aug 17, 2022 at 03:03:06PM +0200, Christian Brauner wrote:
> 
> > +static void binderfs_kill_super(struct super_block *sb)
> > +{
> > +	struct binderfs_info *info = sb->s_fs_info;
> > +
> > +	if (info && info->ipc_ns)
> > +		put_ipc_ns(info->ipc_ns);
> > +
> > +	kfree(info);
> > +	kill_litter_super(sb);
> > +}
> 
> Other way round, please - shut the superblock down, *then*
> free the objects it'd been using.  IOW,

I wondered about that but a lot of places do it the other way around.
So maybe the expected order should be documented somewhere.
Re: [PATCH] binderfs: rework superblock destruction
Posted by Al Viro 3 years, 7 months ago
On Wed, Aug 17, 2022 at 04:01:49PM +0200, Christian Brauner wrote:
> On Wed, Aug 17, 2022 at 02:59:02PM +0100, Al Viro wrote:
> > On Wed, Aug 17, 2022 at 03:03:06PM +0200, Christian Brauner wrote:
> > 
> > > +static void binderfs_kill_super(struct super_block *sb)
> > > +{
> > > +	struct binderfs_info *info = sb->s_fs_info;
> > > +
> > > +	if (info && info->ipc_ns)
> > > +		put_ipc_ns(info->ipc_ns);
> > > +
> > > +	kfree(info);
> > > +	kill_litter_super(sb);
> > > +}
> > 
> > Other way round, please - shut the superblock down, *then*
> > free the objects it'd been using.  IOW,
> 
> I wondered about that but a lot of places do it the other way around.
> So maybe the expected order should be documented somewhere.

???

"If you are holding internal references to dentries/inodes/etc., drop them
first; if you are going to free something that is used by filesystem
methods, don't do that before the filesystem is shut down"

That's just common sense...  Which filesystems are doing that "the other
way around"?
Re: [PATCH] binderfs: rework superblock destruction
Posted by Christian Brauner 3 years, 7 months ago
On Wed, Aug 17, 2022 at 03:19:13PM +0100, Al Viro wrote:
> On Wed, Aug 17, 2022 at 04:01:49PM +0200, Christian Brauner wrote:
> > On Wed, Aug 17, 2022 at 02:59:02PM +0100, Al Viro wrote:
> > > On Wed, Aug 17, 2022 at 03:03:06PM +0200, Christian Brauner wrote:
> > > 
> > > > +static void binderfs_kill_super(struct super_block *sb)
> > > > +{
> > > > +	struct binderfs_info *info = sb->s_fs_info;
> > > > +
> > > > +	if (info && info->ipc_ns)
> > > > +		put_ipc_ns(info->ipc_ns);
> > > > +
> > > > +	kfree(info);
> > > > +	kill_litter_super(sb);
> > > > +}
> > > 
> > > Other way round, please - shut the superblock down, *then*
> > > free the objects it'd been using.  IOW,
> > 
> > I wondered about that but a lot of places do it the other way around.
> > So maybe the expected order should be documented somewhere.
> 
> ???
> 
> "If you are holding internal references to dentries/inodes/etc., drop them
> first; if you are going to free something that is used by filesystem
> methods, don't do that before the filesystem is shut down"
> 
> That's just common sense...  Which filesystems are doing that "the other
> way around"?

I think at least these below. Completely untested...

Signed-off-by: Christian Brauner (Microsoft) <brauner@kernel.org>
---
 arch/s390/hypfs/inode.c      |  3 +--
 fs/devpts/inode.c            |  2 +-
 fs/ramfs/inode.c             |  4 +++-
 security/selinux/selinuxfs.c | 12 ++++++------
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
index 5c97f48cea91..d7d275ef132f 100644
--- a/arch/s390/hypfs/inode.c
+++ b/arch/s390/hypfs/inode.c
@@ -329,9 +329,8 @@ static void hypfs_kill_super(struct super_block *sb)
 		hypfs_delete_tree(sb->s_root);
 	if (sb_info && sb_info->update_file)
 		hypfs_remove(sb_info->update_file);
-	kfree(sb->s_fs_info);
-	sb->s_fs_info = NULL;
 	kill_litter_super(sb);
+	kfree(sb->s_fs_info);
 }
 
 static struct dentry *hypfs_create_file(struct dentry *parent, const char *name,
diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
index 4f25015aa534..78a9095e1748 100644
--- a/fs/devpts/inode.c
+++ b/fs/devpts/inode.c
@@ -509,10 +509,10 @@ static void devpts_kill_sb(struct super_block *sb)
 {
 	struct pts_fs_info *fsi = DEVPTS_SB(sb);
 
+	kill_litter_super(sb);
 	if (fsi)
 		ida_destroy(&fsi->allocated_ptys);
 	kfree(fsi);
-	kill_litter_super(sb);
 }
 
 static struct file_system_type devpts_fs_type = {
diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
index bc66d0173e33..bff49294e037 100644
--- a/fs/ramfs/inode.c
+++ b/fs/ramfs/inode.c
@@ -280,8 +280,10 @@ int ramfs_init_fs_context(struct fs_context *fc)
 
 static void ramfs_kill_sb(struct super_block *sb)
 {
-	kfree(sb->s_fs_info);
+	struct ramfs_fs_info *fsi = sb->s_fs_info;
+
 	kill_litter_super(sb);
+	kfree(fsi);
 }
 
 static struct file_system_type ramfs_fs_type = {
diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
index 8fcdd494af27..fb1dae422d93 100644
--- a/security/selinux/selinuxfs.c
+++ b/security/selinux/selinuxfs.c
@@ -96,9 +96,8 @@ static int selinux_fs_info_create(struct super_block *sb)
 	return 0;
 }
 
-static void selinux_fs_info_free(struct super_block *sb)
+static void selinux_fs_info_free(struct selinux_fs_info *fsi)
 {
-	struct selinux_fs_info *fsi = sb->s_fs_info;
 	int i;
 
 	if (fsi) {
@@ -107,8 +106,7 @@ static void selinux_fs_info_free(struct super_block *sb)
 		kfree(fsi->bool_pending_names);
 		kfree(fsi->bool_pending_values);
 	}
-	kfree(sb->s_fs_info);
-	sb->s_fs_info = NULL;
+	kfree(fsi);
 }
 
 #define SEL_INITCON_INO_OFFSET		0x01000000
@@ -2180,7 +2178,7 @@ static int sel_fill_super(struct super_block *sb, struct fs_context *fc)
 	pr_err("SELinux: %s:  failed while creating inodes\n",
 		__func__);
 
-	selinux_fs_info_free(sb);
+	selinux_fs_info_free(fsi);
 
 	return ret;
 }
@@ -2202,8 +2200,10 @@ static int sel_init_fs_context(struct fs_context *fc)
 
 static void sel_kill_sb(struct super_block *sb)
 {
-	selinux_fs_info_free(sb);
+	struct selinux_fs_info *fsi = sb->s_fs_info;
+
 	kill_litter_super(sb);
+	selinux_fs_info_free(fsi);
 }
 
 static struct file_system_type sel_fs_type = {
-- 
2.34.1
Re: [PATCH] binderfs: rework superblock destruction
Posted by Al Viro 3 years, 7 months ago
On Wed, Aug 17, 2022 at 04:51:44PM +0200, Christian Brauner wrote:

> diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
> index 5c97f48cea91..d7d275ef132f 100644
> --- a/arch/s390/hypfs/inode.c
> +++ b/arch/s390/hypfs/inode.c
> @@ -329,9 +329,8 @@ static void hypfs_kill_super(struct super_block *sb)
>  		hypfs_delete_tree(sb->s_root);
>  	if (sb_info && sb_info->update_file)
>  		hypfs_remove(sb_info->update_file);
> -	kfree(sb->s_fs_info);
> -	sb->s_fs_info = NULL;
>  	kill_litter_super(sb);
> +	kfree(sb->s_fs_info);

UAF, that - *sb gets freed by the time you try to fetch sb->s_fs_info...
Fetch the pointer first, then destroy the object you've fetched it
from, then free what it points to...

> diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
> index 4f25015aa534..78a9095e1748 100644
> --- a/fs/devpts/inode.c
> +++ b/fs/devpts/inode.c
> @@ -509,10 +509,10 @@ static void devpts_kill_sb(struct super_block *sb)
>  {
>  	struct pts_fs_info *fsi = DEVPTS_SB(sb);
>  
> +	kill_litter_super(sb);
>  	if (fsi)
>  		ida_destroy(&fsi->allocated_ptys);
>  	kfree(fsi);
> -	kill_litter_super(sb);
>  }
>  

That one's fine.

>  static struct file_system_type devpts_fs_type = {
> diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
> index bc66d0173e33..bff49294e037 100644
> --- a/fs/ramfs/inode.c
> +++ b/fs/ramfs/inode.c
> @@ -280,8 +280,10 @@ int ramfs_init_fs_context(struct fs_context *fc)
>  
>  static void ramfs_kill_sb(struct super_block *sb)
>  {
> -	kfree(sb->s_fs_info);
> +	struct ramfs_fs_info *fsi = sb->s_fs_info;
> +
>  	kill_litter_super(sb);
> +	kfree(fsi);
>  }

Cosmetical, really - see another posting in the same thread.

>  static struct file_system_type ramfs_fs_type = {
> diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
> index 8fcdd494af27..fb1dae422d93 100644
> --- a/security/selinux/selinuxfs.c
> +++ b/security/selinux/selinuxfs.c
> @@ -96,9 +96,8 @@ static int selinux_fs_info_create(struct super_block *sb)
>  	return 0;
>  }
>  
> -static void selinux_fs_info_free(struct super_block *sb)
> +static void selinux_fs_info_free(struct selinux_fs_info *fsi)
>  {
> -	struct selinux_fs_info *fsi = sb->s_fs_info;
>  	int i;
>  
>  	if (fsi) {
> @@ -107,8 +106,7 @@ static void selinux_fs_info_free(struct super_block *sb)
>  		kfree(fsi->bool_pending_names);
>  		kfree(fsi->bool_pending_values);
>  	}
> -	kfree(sb->s_fs_info);
> -	sb->s_fs_info = NULL;
> +	kfree(fsi);
>  }
>  
>  #define SEL_INITCON_INO_OFFSET		0x01000000
> @@ -2180,7 +2178,7 @@ static int sel_fill_super(struct super_block *sb, struct fs_context *fc)
>  	pr_err("SELinux: %s:  failed while creating inodes\n",
>  		__func__);
>  
> -	selinux_fs_info_free(sb);
> +	selinux_fs_info_free(fsi);
>  
>  	return ret;
>  }
> @@ -2202,8 +2200,10 @@ static int sel_init_fs_context(struct fs_context *fc)
>  
>  static void sel_kill_sb(struct super_block *sb)
>  {
> -	selinux_fs_info_free(sb);
> +	struct selinux_fs_info *fsi = sb->s_fs_info;
> +
>  	kill_litter_super(sb);
> +	selinux_fs_info_free(fsi);
>  }

A real bug, but an incomplete fix - you've just gotten yourself a double-free;
failure in sel_fill_super() has no need to do selinux_fs_info_free() now.
Re: [PATCH] binderfs: rework superblock destruction
Posted by Christian Brauner 3 years, 7 months ago
On Wed, Aug 17, 2022 at 04:21:11PM +0100, Al Viro wrote:
> On Wed, Aug 17, 2022 at 04:51:44PM +0200, Christian Brauner wrote:
> 
> > diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
> > index 5c97f48cea91..d7d275ef132f 100644
> > --- a/arch/s390/hypfs/inode.c
> > +++ b/arch/s390/hypfs/inode.c
> > @@ -329,9 +329,8 @@ static void hypfs_kill_super(struct super_block *sb)
> >  		hypfs_delete_tree(sb->s_root);
> >  	if (sb_info && sb_info->update_file)
> >  		hypfs_remove(sb_info->update_file);
> > -	kfree(sb->s_fs_info);
> > -	sb->s_fs_info = NULL;
> >  	kill_litter_super(sb);
> > +	kfree(sb->s_fs_info);
> 
> UAF, that - *sb gets freed by the time you try to fetch sb->s_fs_info...
> Fetch the pointer first, then destroy the object you've fetched it
> from, then free what it points to...

Please note the "completely untested" in the draft... ;)

If you want me to, I can turn this into something serious to review.

> 
> > diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
> > index 4f25015aa534..78a9095e1748 100644
> > --- a/fs/devpts/inode.c
> > +++ b/fs/devpts/inode.c
> > @@ -509,10 +509,10 @@ static void devpts_kill_sb(struct super_block *sb)
> >  {
> >  	struct pts_fs_info *fsi = DEVPTS_SB(sb);
> >  
> > +	kill_litter_super(sb);
> >  	if (fsi)
> >  		ida_destroy(&fsi->allocated_ptys);
> >  	kfree(fsi);
> > -	kill_litter_super(sb);
> >  }
> >  
> 
> That one's fine.
> 
> >  static struct file_system_type devpts_fs_type = {
> > diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
> > index bc66d0173e33..bff49294e037 100644
> > --- a/fs/ramfs/inode.c
> > +++ b/fs/ramfs/inode.c
> > @@ -280,8 +280,10 @@ int ramfs_init_fs_context(struct fs_context *fc)
> >  
> >  static void ramfs_kill_sb(struct super_block *sb)
> >  {
> > -	kfree(sb->s_fs_info);
> > +	struct ramfs_fs_info *fsi = sb->s_fs_info;
> > +
> >  	kill_litter_super(sb);
> > +	kfree(fsi);
> >  }
> 
> Cosmetical, really - see another posting in the same thread.
> 
> >  static struct file_system_type ramfs_fs_type = {
> > diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
> > index 8fcdd494af27..fb1dae422d93 100644
> > --- a/security/selinux/selinuxfs.c
> > +++ b/security/selinux/selinuxfs.c
> > @@ -96,9 +96,8 @@ static int selinux_fs_info_create(struct super_block *sb)
> >  	return 0;
> >  }
> >  
> > -static void selinux_fs_info_free(struct super_block *sb)
> > +static void selinux_fs_info_free(struct selinux_fs_info *fsi)
> >  {
> > -	struct selinux_fs_info *fsi = sb->s_fs_info;
> >  	int i;
> >  
> >  	if (fsi) {
> > @@ -107,8 +106,7 @@ static void selinux_fs_info_free(struct super_block *sb)
> >  		kfree(fsi->bool_pending_names);
> >  		kfree(fsi->bool_pending_values);
> >  	}
> > -	kfree(sb->s_fs_info);
> > -	sb->s_fs_info = NULL;
> > +	kfree(fsi);
> >  }
> >  
> >  #define SEL_INITCON_INO_OFFSET		0x01000000
> > @@ -2180,7 +2178,7 @@ static int sel_fill_super(struct super_block *sb, struct fs_context *fc)
> >  	pr_err("SELinux: %s:  failed while creating inodes\n",
> >  		__func__);
> >  
> > -	selinux_fs_info_free(sb);
> > +	selinux_fs_info_free(fsi);
> >  
> >  	return ret;
> >  }
> > @@ -2202,8 +2200,10 @@ static int sel_init_fs_context(struct fs_context *fc)
> >  
> >  static void sel_kill_sb(struct super_block *sb)
> >  {
> > -	selinux_fs_info_free(sb);
> > +	struct selinux_fs_info *fsi = sb->s_fs_info;
> > +
> >  	kill_litter_super(sb);
> > +	selinux_fs_info_free(fsi);
> >  }
> 
> A real bug, but an incomplete fix - you've just gotten yourself a double-free;
> failure in sel_fill_super() has no need to do selinux_fs_info_free() now.

Please note the "completely untested" in the draft... ;)
Re: [PATCH] binderfs: rework superblock destruction
Posted by Al Viro 3 years, 7 months ago
On Wed, Aug 17, 2022 at 03:19:13PM +0100, Al Viro wrote:
> On Wed, Aug 17, 2022 at 04:01:49PM +0200, Christian Brauner wrote:
> > On Wed, Aug 17, 2022 at 02:59:02PM +0100, Al Viro wrote:
> > > On Wed, Aug 17, 2022 at 03:03:06PM +0200, Christian Brauner wrote:
> > > 
> > > > +static void binderfs_kill_super(struct super_block *sb)
> > > > +{
> > > > +	struct binderfs_info *info = sb->s_fs_info;
> > > > +
> > > > +	if (info && info->ipc_ns)
> > > > +		put_ipc_ns(info->ipc_ns);
> > > > +
> > > > +	kfree(info);
> > > > +	kill_litter_super(sb);
> > > > +}
> > > 
> > > Other way round, please - shut the superblock down, *then*
> > > free the objects it'd been using.  IOW,
> > 
> > I wondered about that but a lot of places do it the other way around.
> > So maybe the expected order should be documented somewhere.
> 
> ???
> 
> "If you are holding internal references to dentries/inodes/etc., drop them
> first; if you are going to free something that is used by filesystem
> methods, don't do that before the filesystem is shut down"
> 
> That's just common sense...  Which filesystems are doing that "the other
> way around"?

Note that something like e.g. ramfs, where we have a dynamically allocated
object ->s_fs_info is pointing to and gets freed early in their ->kill_sb()
is somewhat misleading - it's used only for two things, one is the
creation of root directory inode (obviously not going to happen at any
point after mount) and another - ->show_options().  By the point we get
around to killing a superblock, it would better *NOT* have mounts pointing
to it that might show up in /proc/mounts and make us call ->show_options().

So there we really know that nothing during the shutdown will even look
at that thing we'd just freed.  Not that there'd ever been a point allocating
it - all that object contains is one unsigned short, so we might as well
just have stored (void *)root_mode in ->s_fs_info.  Oh, well...
Re: [PATCH] binderfs: rework superblock destruction
Posted by Christian Brauner 3 years, 7 months ago
On Wed, Aug 17, 2022 at 03:32:03PM +0100, Al Viro wrote:
> On Wed, Aug 17, 2022 at 03:19:13PM +0100, Al Viro wrote:
> > On Wed, Aug 17, 2022 at 04:01:49PM +0200, Christian Brauner wrote:
> > > On Wed, Aug 17, 2022 at 02:59:02PM +0100, Al Viro wrote:
> > > > On Wed, Aug 17, 2022 at 03:03:06PM +0200, Christian Brauner wrote:
> > > > 
> > > > > +static void binderfs_kill_super(struct super_block *sb)
> > > > > +{
> > > > > +	struct binderfs_info *info = sb->s_fs_info;
> > > > > +
> > > > > +	if (info && info->ipc_ns)
> > > > > +		put_ipc_ns(info->ipc_ns);
> > > > > +
> > > > > +	kfree(info);
> > > > > +	kill_litter_super(sb);
> > > > > +}
> > > > 
> > > > Other way round, please - shut the superblock down, *then*
> > > > free the objects it'd been using.  IOW,
> > > 
> > > I wondered about that but a lot of places do it the other way around.
> > > So maybe the expected order should be documented somewhere.
> > 
> > ???
> > 
> > "If you are holding internal references to dentries/inodes/etc., drop them
> > first; if you are going to free something that is used by filesystem
> > methods, don't do that before the filesystem is shut down"
> > 
> > That's just common sense...  Which filesystems are doing that "the other
> > way around"?
> 
> Note that something like e.g. ramfs, where we have a dynamically allocated
> object ->s_fs_info is pointing to and gets freed early in their ->kill_sb()
> is somewhat misleading - it's used only for two things, one is the
> creation of root directory inode (obviously not going to happen at any
> point after mount) and another - ->show_options().  By the point we get
> around to killing a superblock, it would better *NOT* have mounts pointing
> to it that might show up in /proc/mounts and make us call ->show_options().
> 
> So there we really know that nothing during the shutdown will even look
> at that thing we'd just freed.  Not that there'd ever been a point allocating
> it - all that object contains is one unsigned short, so we might as well
> just have stored (void *)root_mode in ->s_fs_info.  Oh, well...

Binderfs was really the first fs I ever wrote, and back then I was trying
to stay as close to best practice as possible. One thing I remember being
unclear about was what the best practice for filesystem shutdown would
be. That included ->put_super() vs just ->kill_sb() but also the order
in which kill_litter_super() and sb->s_fs_info cleanup should happen.

For binderfs the order does matter, and that's also the reason I
originally decided to use ->put_super(): it's called after
evict_inodes() and gives the required ordering.