
Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume

To: "Rafael J. Wysocki" <rjw@xxxxxxx>
Subject: Re: [linux-lvm] 2.6.22-rc5 XFS fails after hibernate/resume
From: David Greaves <david@xxxxxxxxxxxx>
Date: Mon, 02 Jul 2007 15:32:26 +0100
Cc: Tejun Heo <htejun@xxxxxxxxx>, David Chinner <dgc@xxxxxxx>, David Robinson <zxvdr.au@xxxxxxxxx>, LVM general discussion and development <linux-lvm@xxxxxxxxxx>, "'linux-kernel@xxxxxxxxxxxxxxx'" <linux-kernel@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, linux-pm <linux-pm@xxxxxxxxxxxxxx>, LinuxRaid <linux-raid@xxxxxxxxxxxxxxx>
In-reply-to: <200707021608.56616.rjw@sisk.pl>
References: <46744065.6060605@dgreaves.com> <4684C0DD.4080702@dgreaves.com> <4688D9F2.6020000@gmail.com> <200707021608.56616.rjw@sisk.pl>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla-Thunderbird 2.0.0.4 (X11/20070618)
Rafael J. Wysocki wrote:
> On Monday, 2 July 2007 12:56, Tejun Heo wrote:
>> David Greaves wrote:
>>> Tejun Heo wrote:
>>>> It's really weird tho.  The PHY RDY status changed events are coming
>>>> from the device which is NOT used while resuming
>>> There is an obvious problem there though Tejun (the errors even when sda
>>> isn't involved in the OS boot) - can I start another thread about that
>>> issue/bug later? I need to reshuffle partitions so I'd rather get the
>>> hibernate working first and then go back to it if that's OK?
>> Yeah, sure.  The problem is that we don't know whether or how those two
>> are related.  It would be great if there's a way to verify memory image
>> read from hibernation is intact.  Rafael, any ideas?
>
> Well, s2disk has an option to compute an MD5 checksum of the image during
> the hibernation and verify it while reading the image.
(Assuming you mean the mainline version)

Sounds like a good thing to try next...
Couldn't see anything on this in ../Documentation/power/*
How do I enable it?
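
If you mean the userspace s2disk from the uswsusp tools rather than anything
in-kernel, I'm guessing it's the "compute checksum" line in its config file -
something like the sketch below (file name and option are my reading of the
uswsusp suspend.conf format, and the resume device is just a placeholder):

  # /etc/suspend.conf  (the Debian package calls it /etc/uswsusp.conf, I think)
  resume device = /dev/sdXN     # placeholder - whichever swap partition holds the image
  compute checksum = y          # s2disk stores an MD5 of the image; resume re-checks it on read

Is that the one?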


> Still, s2disk/resume
> aren't very easy to install and configure ...

I have it working fine on 2 other machines now so that doesn't appear to be a problem.


David

