
Re: Storage server, hung tasks and tracebacks

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: Storage server, hung tasks and tracebacks
From: Brian Candler <B.Candler@xxxxxxxxx>
Date: Tue, 15 May 2012 15:02:37 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4FA4C321.2070105@xxxxxxxxxxxxxxxxx>
References: <20120502184450.GA2557@xxxxxxxx> <4FA27EF8.6040002@xxxxxxxxxxxxxxxxx> <20120503204157.GC4387@xxxxxxxx> <4FA3047D.8060908@xxxxxxxxxxxxxxxxx> <20120504163237.GA6128@xxxxxxxx> <4FA4C321.2070105@xxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
Update:

After a week away, I am continuing to try to narrow down the problem of I/O
hanging on this system.

I can fairly reliably reproduce the problem on a system with 24 disks, and I've
embarked on trying some different configs to see what's the simplest way I
can make it die.

During this, I found something of interest: I happened to leave an 'iostat
5' process running, and that hung too; ps showed it in 'D+' state, and
it was unkillable.

root        34  0.6  0.0      0     0 ?        D    11:29   1:18 [kswapd0]
root      1258  0.0  0.0  15976   532 ?        Ds   11:29   0:00 
/usr/sbin/irqbalance
root      1421  0.0  0.0      0     0 ?        D    12:49   0:01 [xfsaild/md127]
snmp      1430  0.0  0.0  48608  3440 ?        D    11:29   0:00 
/usr/sbin/snmpd -Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p 
/var/run/snmpd.pid
xxxx      1614  1.1  0.0 378860  3812 pts/1    D+   12:50   1:15 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1669  1.2  0.0 378860  3816 pts/2    D+   12:50   1:21 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1727  0.5  0.0 383424   692 pts/3    Dl+  12:51   0:37 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1782  1.2  0.0 378860  3824 pts/4    D+   12:51   1:20 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1954  0.0  0.0   5912   544 pts/0    D+   12:58   0:00 iostat 5
root      2642  0.2  0.0      0     0 ?        D    13:25   0:09 [kworker/0:1]
root      3233  0.0  0.0   5044   168 ?        Ds   13:50   0:00 /usr/sbin/sshd 
-D -R
xxxx      4648  0.0  0.0   8104   936 pts/6    S+   14:41   0:00 grep 
--color=auto  D
root     29491  0.0  0.0      0     0 ?        D    12:45   0:00 [kworker/1:2]

I wonder whether iostat actually communicates with the device driver at all.
If not, then presumably it is reading some kernel data structure, and perhaps
something is holding a lock on that.

At the same time, I notice that 'cat /proc/diskstats' still works, and
starting a new 'iostat 5' process works too.
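
For reference, my understanding is that iostat (from sysstat) gets its numbers
from /proc/diskstats rather than talking to the driver at all. A rough Python
sketch of that kind of sampling loop, just to illustrate the data path (not how
iostat is actually implemented), would be:

#!/usr/bin/env python3
# Rough sketch of an iostat-style sampling loop: read /proc/diskstats,
# sleep, read it again and print per-interval deltas.  This only
# illustrates the data path (procfs); it is not sysstat's iostat.
import time

def read_diskstats():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields 0-2: major minor name; 5: sectors read; 9: sectors written
            stats[fields[2]] = (int(fields[5]), int(fields[9]))
    return stats

interval = 5
prev = read_diskstats()
while True:
    time.sleep(interval)
    cur = read_diskstats()
    for dev, (rd, wr) in sorted(cur.items()):
        if dev in prev:
            prd, pwr = prev[dev]
            # sectors are 512 bytes, so delta / 2 / interval gives kB/s
            print("%-10s rkB/s %8.1f  wkB/s %8.1f"
                  % (dev, (rd - prd) / 2.0 / interval,
                          (wr - pwr) / 2.0 / interval))
    print()
    prev = cur

Given that a loop like that, and a fresh 'iostat 5', still run fine, the stuck
one is presumably blocked somewhere other than the /proc read itself.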

After issuing 'halt -p' I get this:

root        34  0.6  0.0      0     0 ?        D    11:29   1:18 [kswapd0]
root      1258  0.0  0.0  15976   532 ?        Ds   11:29   0:00 
/usr/sbin/irqbalance
root      1421  0.0  0.0      0     0 ?        D    12:49   0:01 [xfsaild/md127]
snmp      1430  0.0  0.0  48608  3440 ?        D    11:29   0:00 
/usr/sbin/snmpd -Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p 
/var/run/snmpd.pid
xxxx      1614  1.0  0.0 378860  3812 pts/1    D+   12:50   1:15 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1669  1.1  0.0 378860  3816 pts/2    D+   12:50   1:21 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1727  0.5  0.0 383424   692 pts/3    Dl+  12:51   0:37 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1782  1.1  0.0 378860  3824 pts/4    D+   12:51   1:20 bonnie++ -d 
/disk/scratch/test -s 16384k -n 98:800k:500k:1000
xxxx      1954  0.0  0.0   5912   544 pts/0    D+   12:58   0:00 iostat 5
root      2642  0.1  0.0      0     0 ?        D    13:25   0:09 [kworker/0:1]
root      3233  0.0  0.0   5044   168 ?        Ds   13:50   0:00 /usr/sbin/sshd 
-D -R
root      4753  0.0  0.0  15056   928 ?        D    14:42   0:00 umount 
/run/rpc_pipefs
root      4828  0.0  0.0   4296   348 ?        D    14:42   0:00 sync
root      4834  0.0  0.0   8100   624 pts/6    R+   14:50   0:00 grep 
--color=auto  D
root     29491  0.0  0.0      0     0 ?        D    12:45   0:00 [kworker/1:2]

I see that even umounting rpc_pipefs is hanging, which suggests there's some
sort of global lock involved.
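
For next time it wedges, here's a quick sketch (my own, nothing official) that
walks /proc and dumps the wait channel and kernel stack of every task in D
state, which should show what they are all blocked on. It needs root, and
/proc/<pid>/stack needs a kernel built with CONFIG_STACKTRACE; 'echo w >
/proc/sysrq-trigger' gets much the same information into dmesg.

#!/usr/bin/env python3
# Print pid, command, wait channel and kernel stack for every task in
# uninterruptible sleep ('D').  Run as root; quick diagnostic sketch only.
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except (OSError, IOError):
        return ""

for pid in sorted((p for p in os.listdir("/proc") if p.isdigit()), key=int):
    stat = read("/proc/%s/stat" % pid)        # "pid (comm) state ..."
    if not stat:
        continue
    comm = stat[stat.find("(") + 1:stat.rfind(")")]
    state = stat[stat.rfind(")") + 2:].split()[0]
    if state != "D":
        continue
    print("pid %s (%s)" % (pid, comm))
    print("  wchan: %s" % read("/proc/%s/wchan" % pid))
    for line in read("/proc/%s/stack" % pid).splitlines():
        print("   ", line)
    print()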

Anyway, I just wonder if this jogs anyone's memory as to why iostat
would hang in an unkillable way.

Regards,

Brian.
