As uncovered by xfstests generic/465, concurrent reads and writes
could lead to this error:
doRead 3015532: corrupt block #1039: stupidgcm: message authentication failed
because a read could pick up a block that has not yet been completely written -
write() is not atomic!
Now writes take ContentLock exclusively, while reads take it shared,
meaning that multiple reads can run in parallel with each other, but
not with a write.
This also simplifies the file header locking.
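Roughly, the new scheme looks like this; a minimal sketch using sync.RWMutex, the actual ContentLock implementation may differ in detail:
```
package fusefrontend

import "sync"

// Illustrative only: reads take the lock shared, writes take it exclusively,
// so reads can run in parallel with each other, but never with a write.
type file struct {
	contentLock sync.RWMutex
}

func (f *file) doRead(buf []byte, off int64) {
	f.contentLock.RLock() // shared
	defer f.contentLock.RUnlock()
	// ... read and decrypt the affected ciphertext blocks ...
}

func (f *file) doWrite(data []byte, off int64) {
	f.contentLock.Lock() // exclusive
	defer f.contentLock.Unlock()
	// ... read-modify-write the affected ciphertext blocks ...
}
```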
xfstests generic/083 fills the filesystem almost completely while
running fsstress in parallel. In fsck, these would show up:
readFileID 2580: incomplete file, got 18 instead of 19 bytes
This could happen when writing the file header works, but writing
the actual data fails.
Now we kill the header again by truncating the file to zero.
If the underlying filesystem is full, it is normal to get ENOSPC here.
Log at Info level instead of Warning.
Fixes xfstests generic/015 and generic/027, which complained about
the extra output.
O_DIRECT accesses must be aligned in both offset and length. Due to our
crypto header, alignment will be off, even if userspace makes aligned
accesses. Running xfstests generic/013 on ext4 used to trigger lots of
EINVAL errors due to missing alignment. Just fall back to buffered IO.
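The fallback amounts to masking the flag before opening the backing file; a minimal sketch, not necessarily the exact gocryptfs code:
```
package fusefrontend

import "syscall"

// stripODirect removes O_DIRECT from the open flags. Our crypto header shifts
// all offsets, so the alignment requirements of O_DIRECT can never be met on
// the backing file - use buffered I/O instead.
func stripODirect(flags int) int {
	return flags &^ syscall.O_DIRECT
}
```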
"gocryptfs -fsck" will need access to helper functions,
and to get that, it will need to cast a gofuse.File to a
fusefrontend.File. Make fusefrontend.File exported to make
this work.
We are clean again.
Warnings were:
internal/fusefrontend/fs.go:443:14: should omit type string from declaration
of var cTarget; it will be inferred from the right-hand side
internal/fusefrontend/xattr.go:26:1: comment on exported method FS.GetXAttr
should be of the form "GetXAttr ..."
internal/syscallcompat/sys_common.go:9:7: exported const PATH_MAX should have
comment or be unexported
Reading system.posix_acl_access and system.posix_acl_default
should return EOPNOTSUPP to inform user-space that we do not
support ACLs.
xfstests essentially does
chacl -l | grep "Operation not supported"
to determine if the filesystem supports ACLs, and used to
wrongly believe that gocryptfs does.
mv is unhappy when we return EPERM when it tries to set
system.posix_acl_access:
mv: preserving permissions for ‘b/x’: Operation not permitted
Now we return EOPNOTSUPP like tmpfs does and mv seems happy.
Xattr values are binary-safe; there is no need to base64-encode them.
Old, base64-encoded values are supported transparently
on reading. Writing xattr values now always writes them binary.
We previously returned EPERM to prevent the kernel from
blacklisting our xattr support once we get an unsupported
flag, but this causes lots of trouble on MacOS:
Cannot save files from GUI apps, see
https://github.com/rfjakob/gocryptfs/issues/229
Returning ENOSYS triggers the dotfiles fallback on MacOS
and fixes the issue.
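A minimal sketch of the flag check; the function name is illustrative:
```
package fusefrontend

import (
	"syscall"

	"golang.org/x/sys/unix"
)

// filterXattrSetFlags rejects flag bits we do not handle. Returning ENOSYS
// (instead of EPERM) triggers the dotfiles fallback on MacOS.
func filterXattrSetFlags(flags int) syscall.Errno {
	const supported = unix.XATTR_CREATE | unix.XATTR_REPLACE
	if flags&^supported != 0 {
		return syscall.ENOSYS
	}
	return 0
}
```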
OpenDir and ListXAttr skip over corrupt entries;
readFileID treats files that are too small as empty.
This improves usability in the face of corruption,
but hides the problem in a log message instead of
putting it in the return code.
Create a channel to report these corruptions to fsck
so it can report them to the user.
Also update the manpage and the changelog with the -fsck option.
Closes https://github.com/rfjakob/gocryptfs/issues/191
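A sketch of the reporting mechanism; the channel and type names are illustrative, the real ones may differ:
```
package fusefrontend

// CorruptItem describes one corrupt entry that was skipped over.
type CorruptItem struct {
	Path string
	Err  error
}

// CorruptItems is drained by "gocryptfs -fsck". The send is non-blocking so
// that normal FUSE operation is not slowed down when nobody is listening.
var CorruptItems = make(chan CorruptItem, 10)

func reportCorruptItem(path string, err error) {
	select {
	case CorruptItems <- CorruptItem{Path: path, Err: err}:
	default:
		// fsck is not running, drop the report
	}
}
```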
This should not happen via FUSE as the kernel caps the size,
but with fsck we have the first user that calls Read directly.
For symmetry, check it for Write as well.
A few places have called tlog.Warn.Print, which directly
calls into log.Logger due to embedding, losing all features
of tlog.
Stop embedding log.Logger to make sure the internal functions
cannot be called accidentally and fix (several!) instances
that did.
Both fusefrontend and fusefrontend_reverse were doing
essentially the same thing, move it into main's
initFuseFrontend.
A side-effect is that we have a reference to cryptocore
in main, which will help with wiping the keys on exit
(https://github.com/rfjakob/gocryptfs/issues/211).
Now that we have Fstatat we can use it in Getdents to
get rid of the path name.
Also, add an emulated version of getdents for MacOS. This allows us
to drop the !HaveGetdents special cases from fusefrontend.
Modify the getdents test to test both native getdents and the emulated
version.
In PlaintextNames mode the "gocryptfs.longname." prefix does not have any
special meaning. We should not attempt to delete any .name files.
Partially fixes https://github.com/rfjakob/gocryptfs/issues/174
This is already done in regular mode, but was missing when PlaintextNames mode
is enabled. As a result, symlinks created by non-root users were still owned
by root afterwards.
Fixes https://github.com/rfjakob/gocryptfs/issues/176
In PlaintextNames mode the "gocryptfs.longname." prefix does not have any
special meaning. We should not attempt to read the directory IV or to
create special .name files.
Partially fixes https://github.com/rfjakob/gocryptfs/issues/174
If the user manages to replace the directory with
a symlink at just the right time, we could be tricked
into chown'ing the wrong file.
This change fixes the race by using fchownat, which
unfortunately is not available on darwin, hence a compat
wrapper is added.
Scenario, as described by @slackner at
https://github.com/rfjakob/gocryptfs/issues/177 :
1. Create a forward mount point with `plaintextnames` enabled
2. Mount as root user with `allow_other`
3. For testing purposes create a file `/tmp/file_owned_by_root`
which is owned by the root user
4. As a regular user run inside of the GoCryptFS mount:
```
mkdir tempdir
mknod tempdir/file_owned_by_root p &
mv tempdir tempdir2
ln -s /tmp tempdir
```
When the steps are done fast enough and in the right order
(run in a loop!), the device file will be created in
`tempdir`, but the `lchown` will be executed by following
the symlink. As a result, the ownership of the file located
at `/tmp/file_owned_by_root` will be changed.
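Sketch of the compat wrapper idea; illustrative, the darwin emulation is omitted:
```
package syscallcompat

import "golang.org/x/sys/unix"

// Fchownat changes the ownership of "name" relative to the already-open
// directory fd "dirfd" without following symlinks. Because the lookup happens
// via dirfd, swapping the parent directory for a symlink no longer matters.
func Fchownat(dirfd int, name string, uid int, gid int) error {
	return unix.Fchownat(dirfd, name, uid, gid, unix.AT_SYMLINK_NOFOLLOW)
}
```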
Fixes https://github.com/rfjakob/gocryptfs/issues/171
Steps to reproduce:
* Create a regular forward mount point
* Create a new directory in the mount point
* Manually delete the gocryptfs.diriv file from the corresponding ciphertext
directory
* Attempt to delete the directory with 'rmdir <dirname>'
Although the code explicitly checks for empty directories, it still attempts
to move the non-existent gocryptfs.diriv file and fails with:
rmdir: failed to remove '<dirname>': No such file or directory
Fixes https://github.com/rfjakob/gocryptfs/issues/170
Steps to reproduce the problem:
* Create a regular forward mount point
* Create a file with a shortname and one with a long filename
* Try to run 'mv <shortname> <longname>'
This should actually work and replace the existing file, but instead it
fails with:
mv: cannot move '<shortname>' to '<longname>': File exists
The problem is the creation of the .name file. If the target already exists
we can safely ignore the EEXIST error and just keep the existing .name file.
Our byte cache pools are sized according to MAX_KERNEL_WRITE, but the
running kernel may have a higher limit set. Clamp to what we can
handle.
Fixes a panic on a Synology NAS reported at
https://github.com/rfjakob/gocryptfs/issues/145
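Sketch of the clamping; the MAX_KERNEL_WRITE value and the helper signature are assumptions:
```
package fusefrontend

// MAX_KERNEL_WRITE is what our byte-buffer pools are sized for.
const MAX_KERNEL_WRITE = 128 * 1024

// clampWrite clamps an oversized write request to the size our buffer pools
// can handle.
func clampWrite(data []byte) []byte {
	if len(data) > MAX_KERNEL_WRITE {
		return data[:MAX_KERNEL_WRITE]
	}
	return data
}
```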
MacOS creates lots of these files, and if the directory is otherwise
empty, we would throw an IO error to the unsuspecting user.
With this patch, we log a warning, but otherwise pretend we did not
see it.
Mitigates https://github.com/rfjakob/gocryptfs/issues/140
...and if Getdents is not available at all.
Due to this warning I now know that SSHFS always returns DT_UNKNOWN:
gocryptfs[8129]: Getdents: convertDType: received DT_UNKNOWN, falling back to Lstat
This behavior is confirmed at http://ahefner.livejournal.com/16875.html:
"With sshfs, I finally found that obscure case. The dtype is always set to DT_UNKNOWN [...]"
Remove the "Masterkey" field from fusefrontend.Args because it
should not be stored longer than necessary. Instead pass the
masterkey as a separate argument to the filesystem initializers.
Then overwrite it with zeros immediately so we don't have
to wait for garbage collection.
Note that the crypto implementation still stores at least a
masterkey-derived value, so this change makes it harder, but not
impossible, to extract the encryption keys from memory.
Suggested at https://github.com/rfjakob/gocryptfs/issues/137
Due to RMW, we always need read permissions on the backing file. This is a
problem if the file permissions do not allow reading (e.g. 0200 permissions).
This patch works around that problem by chmod'ing the file, obtaining a fd,
and chmod'ing it back.
Test included.
Issue reported at: https://github.com/rfjakob/gocryptfs/issues/125
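Roughly, the workaround looks like this; a simplified sketch, not the exact code:
```
package fusefrontend

import "os"

// openRMW opens the backing file read-write. If the permissions forbid
// reading (e.g. 0200), temporarily add the owner-read bit, open the file,
// and chmod back to the original mode.
func openRMW(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_RDWR, 0)
	if err == nil || !os.IsPermission(err) {
		return f, err
	}
	fi, statErr := os.Lstat(path)
	if statErr != nil {
		return nil, err
	}
	perms := fi.Mode().Perm()
	if os.Chmod(path, perms|0400) != nil {
		return nil, err
	}
	defer os.Chmod(path, perms) // restore the original mode
	return os.OpenFile(path, os.O_RDWR, 0)
}
```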
Previously we ran through the decryption steps even for an empty
ciphertext slice. The functions handle it correctly, but returning
early skips all the extra calls.
Speeds up the tar extract benchmark by about 4%.
We use two levels of buffers:
1) 4kiB+overhead for each ciphertext block
2) 128kiB+overhead for each FUSE write (32 ciphertext blocks)
This commit adds a sync.Pool for both levels.
The memory-efficiency for small writes could be improved,
as we now always use a 128kiB buffer.
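A minimal sketch of the two pool levels using sync.Pool; the overhead constant is an assumption:
```
package contentenc

import "sync"

const (
	blockSize = 4096           // plaintext bytes per block
	overhead  = 32             // per-block crypto overhead, illustrative value
	fuseWrite = 32 * blockSize // one FUSE write = 32 blocks = 128kiB
)

// Level 1: scratch buffer for a single ciphertext block.
var blockPool = sync.Pool{
	New: func() interface{} { return make([]byte, blockSize+overhead) },
}

// Level 2: scratch buffer for a full FUSE write worth of ciphertext.
var writePool = sync.Pool{
	New: func() interface{} { return make([]byte, fuseWrite+32*overhead) },
}

func getBlockBuf() []byte { return blockPool.Get().([]byte) }

func putBlockBuf(buf []byte) { blockPool.Put(buf) }

func getWriteBuf() []byte { return writePool.Get().([]byte) }

func putWriteBuf(buf []byte) { writePool.Put(buf) }
```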
This fixes a few issues I have found reviewing the code:
1) Limit the amount of data ReadLongName() will read. Previously,
you could send gocryptfs into out-of-memory by symlinking
gocryptfs.diriv to /dev/zero.
2) Handle the empty input case in unPad16() by returning an
error. Previously, it would panic with an out-of-bounds array
read. It is unclear to me if this could actually be triggered.
3) Reject empty names after base64-decoding in DecryptName().
An empty name crashes emeCipher.Decrypt().
It is unclear to me if B64.DecodeString() can actually return
a non-error empty result, but let's guard against it anyway.
We do not have to track the writeOnly status because the kernel
will not forward read requests on a write-only FD to us anyway.
I have verified this behavior manually on a 4.10.8 kernel and also
added a testcase.
Force decode of encrypted files even if the integrity check fails, instead of
failing with an IO error. Warning messages are still printed to syslog if corrupted
files are encountered.
This can be useful for recovering files from disks with bad sectors or other
corrupted media.
Closes https://github.com/rfjakob/gocryptfs/pull/102 .
go-fuse has added a new method to the nodefs.File interface that
caused this build error:
internal/fusefrontend/file.go:75: cannot use file literal (type *file) as type nodefs.File in return argument:
*file does not implement nodefs.File (missing Flock method)
Fixes https://github.com/rfjakob/gocryptfs/issues/104 and
prevents the problem from happening again.
The volatile inode numbers that we used before cause "find" to complain and error out.
Virtual inode numbers are derived from their parent file inode number by adding 10^19,
which is hopefully large enough to never cause problems in practice.
If the backing directory contains inode numbers higher than that, stat() on these files
will return EOVERFLOW.
Example directory listing after this change:
$ ls -i
926473 gocryptfs.conf
1000000000000926466 gocryptfs.diriv
944878 gocryptfs.longname.hmZojMqC6ns47eyVxLlH2ailKjN9bxfosi3C-FR8mjA
1000000000000944878 gocryptfs.longname.hmZojMqC6ns47eyVxLlH2ailKjN9bxfosi3C-FR8mjA.name
934408 Tdfbf02CKsTaGVYnAsSypA
Due to kernel readahead, we usually get multiple read requests
at the same time. These get submitted to the backing storage in
random order, which is a problem if seeking is very expensive.
Details: https://github.com/rfjakob/gocryptfs/issues/92
...if doWrite() can do it for us. This avoids the situation
where the file consists only of a file header when calling
doWrite.
A later patch will check for this condition and warn about it,
as with this change it should no longer occur in normal operation.
...but keep it disabled by default for new filesystems.
We are still missing an example filesystem and CLI arguments
to explicitly enable and disable it.
There are two independent backends, one for name encryption,
the other one, AEAD, for file content.
"BackendTypeEnum" only applies to AEAD (file content), so make that
clear in the name.
Version 1.1 of the EME package (github.com/rfjakob/eme) added
a more convenient interface. Use it.
Note that you have to upgrade your EME package (go get -u)!
Preallocation is very slow on HDDs that run btrfs. Give the
user the option to disable it. This greatly speeds up small file
operations but reduces the robustness against out-of-space errors.
Also add the option to the man page.
More info: https://github.com/rfjakob/gocryptfs/issues/63
Stat() calls are expensive on NFS as they need a full network
round-trip. We detect when a write immediately follows the
last one and skip the Stat in this case because the write
cannot create a file hole.
On my (slow) NAS, this takes the write speed from 24MB/s to
41MB/s.
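Sketch of the bookkeeping; field and method names are made up for illustration:
```
package fusefrontend

// file remembers where the previous write ended. If the next write starts
// exactly there, it cannot create a file hole, so the expensive Fstat size
// check (a full network round-trip on NFS) can be skipped.
type file struct {
	lastWriteEnd int64
}

func (f *file) holeCheckNeeded(off int64) bool {
	return off != f.lastWriteEnd
}

func (f *file) recordWrite(off int64, n int) {
	f.lastWriteEnd = off + int64(n)
}
```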
Test that we get the right timestamp when extracting a tarball.
Also simplify the workaround in doTestUtimesNano() and fix the
fact that it was running no test at all.
This can happen during normal operation when the directory has
been deleted concurrently. But it can also mean that the
gocryptfs.diriv is missing due to an error, so log the event
at "info" level.
Commit af5441dcd9 has caused a
regression ( https://github.com/rfjakob/gocryptfs/issues/35 )
that is fixed by this commit.
The go-fuse library by now has all the syscall wrappers in
place to correctly handle Utimens, also for symlinks.
Instead of duplicating the effort here just call into go-fuse.
Closes #35
This fixes a build problem on 32-bit hosts:
internal/fusefrontend/file.go:400: cannot use a.Unix() (type int64) as
type int32 in assignment
internal/fusefrontend/file.go:406: cannot use m.Unix() (type int64) as
type int32 in assignment
It also enables full nanosecond timestamps for dates
after 1970.
...and convert all calls to syscall.{Fallocate,Openat}
to syscallcompat.
Neither syscall is available on OSX. We emulate Openat and just
return EOPNOTSUPP for Fallocate.
unPad16 returns detailed errors including the position of the
incorrect bytes. Kill a possible padding oracle by lumping
everything into a generic error.
The detailed error is only logged if debug is active.
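The shape of the hardened unpadding, sketched with illustrative names and a plain log call standing in for the debug-only logger:
```
package nametransform

import (
	"errors"
	"log"
)

var errUnpad = errors.New("unPad16 error") // one generic error for every case

// unPad16 removes the padding from a 16-byte-aligned buffer. Detailed
// diagnostics only go to the debug log; callers always see errUnpad, so the
// error cannot be used as a padding oracle.
func unPad16(padded []byte) ([]byte, error) {
	if len(padded) == 0 || len(padded)%16 != 0 {
		return nil, errUnpad
	}
	padLen := int(padded[len(padded)-1])
	if padLen == 0 || padLen > 16 {
		return nil, errUnpad
	}
	for i := len(padded) - padLen; i < len(padded); i++ {
		if int(padded[i]) != padLen {
			log.Printf("unPad16: wrong byte at position %d", i) // debug only
			return nil, errUnpad
		}
	}
	return padded[:len(padded)-padLen], nil
}
```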
We were growing the file block-by-block which was pretty
inefficient. We now coalesce all the grows into a single
Ftruncate. Also simplifies the code!
Simplistic benchmark:
Before:
$ time truncate -s 1000M foo
real 0m0.568s
After:
$ time truncate -s 1000M foo
real 0m0.205s
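The grow path now boils down to a single Ftruncate; a simplified sketch (the real code works on ciphertext sizes including per-block overhead):
```
package fusefrontend

import "syscall"

// growFile extends the backing file from oldSize to newSize in one step
// instead of writing zero blocks one by one.
func growFile(fd int, oldSize int64, newSize int64) error {
	if newSize <= oldSize {
		return nil
	}
	return syscall.Ftruncate(fd, newSize)
}
```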
XFS returns a different error code if you try to overwrite
a non-empty directory with a directory:
XFS: mv: cannot move ‘foo’ to ‘bar/foo’: File exists
ext4: mv: cannot move 'foo' to 'bar/foo': Directory not empty
So have EEXIST trigger the Rmdir logic as well.
Fixes issue #20
Link: https://github.com/rfjakob/gocryptfs/issues/20
The "!fs.args.DirIV" special case was removed by b17f0465c7
but that, by accident, also removed the handling for
PlaintextNames.
Re-add it as an explicit PlaintextNames special case.
Also adds support for removing directories that are missing their
gocryptfs.diriv file for some reason.
FUSE filesystems are mounted with "nosuid" and "nodev" by default. If we run as root,
we can use device files and setuid binaries by passing the opposite mount options, "suid" and "dev".
Also we have to use syscall.Chmod instead of os.Chmod because the
portability translation layer "syscallMode" messes up the sgid
and suid bits.
Fixes 70% of the failures in xfstests generic/193. The remaining failures are
related to truncate, but we err on the safe side:
$ diff -u tests/generic/193.out /home/jakob/src/fuse-xfstests/results//generic/193.out.bad
[...]
check that suid/sgid bits are cleared after successful truncate...
with no exec perm
before: -rwSr-Sr--
-after: -rw-r-Sr--
+after: -rw-r--r--
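For illustration, the difference between the two chmod variants (the path and mode are made up):
```
package main

import (
	"os"
	"syscall"
)

func main() {
	const path = "/tmp/example-suid-binary" // illustrative path

	// With os.Chmod, the setuid bit must be expressed as os.ModeSetuid;
	// a plain numeric 04755 would be silently stripped down to 0755 by the
	// syscallMode translation layer.
	_ = os.Chmod(path, 0755|os.ModeSetuid)

	// syscall.Chmod takes the raw mode bits unchanged, which is exactly
	// what the FUSE layer hands us, so suid/sgid survive.
	_ = syscall.Chmod(path, 04755)
}
```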
Support truncate(2) by opening the file and calling ftruncate(2)
While the glibc "truncate" wrapper seems to always use ftruncate, fsstress from
xfstests uses this a lot by calling "truncate64" directly.
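Sketch of the path-based Truncate; illustrative, the real code operates on the encrypted path and reuses the existing ftruncate-based logic:
```
package fusefrontend

import "os"

// truncatePath implements truncate(2) by opening the backing file and
// calling ftruncate(2) on it via (*os.File).Truncate.
func truncatePath(cPath string, size int64) error {
	f, err := os.OpenFile(cPath, os.O_RDWR, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	return f.Truncate(size)
}
```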
tlog is used heavily everywhere and deserves a shorter name.
Renamed using sed magic, without any manual rework:
find * -type f -exec sed -i 's/toggledlog/tlog/g' {} +
Warnings were:
main.go:234: declaration of err shadows declaration at main.go:163:
internal/fusefrontend/file.go:401: declaration of err shadows declaration at internal/fusefrontend/file.go:379:
internal/fusefrontend/file.go:419: declaration of err shadows declaration at internal/fusefrontend/file.go:379:
internal/fusefrontend/fs_dir.go:140: declaration of err shadows declaration at internal/fusefrontend/fs_dir.go:97:
If /proc/self/fd/X does not exist, the actual problem is that the file
descriptor is invalid, so report EBADF instead of ENOENT.
go-fuse's pathfs prefers using an open fd even for path-based operations
but does not take any locks to prevent the fd from being closed.
Instead, it retries the operation by path if it gets EBADF. So this
change allows the retry logic to work correctly.
This fixes the error
rsync: failed to set times on "/tmp/ping.Kgw.mnt/linux-3.0/[...]/.dvb_demux.c.N7YlEM":
No such file or directory (2)
that was triggered by pingpong-rsync.bash.
We (actually, go-fuse) used to call Chown() instead of Lchown()
which meant that the operation would fail on dangling symlinks.
Fix this by calling os.Lchown() ourselves. Also add a test case
for this.
Just presenting an empty directory means that the user does not know
that things went wrong unless he checks the syslog or tries to delete
the directory.
It would be nice to report the error even if only some files were
invalid. However, go-fuse does not allow returning the valid
directory entries AND an error.
... with the "released" boolean.
For some reason, the "f.fd.Fd() < 0" check did not work reliably,
leading to nil pointer panics on the following wlock.lock().
The problem was discovered during fsstress testing and is unlikely
to happen in normal operations.
With this change, we passed 1700+ fsstress iterations.
Paths in statfs() calls were not encrypted, resulting in
a "Function not implemented" error when the unencrypted
path didn't exist in the underlying (encrypted)
filesystem.
$ df plain/existingdir
df: ‘plain/existingdir’: Function not implemented
Commit 730291feab properly freed wlock when the file descriptor is
closed. However, concurrently running Write and Truncate calls may
still want to lock it. Check if the fd has been closed first.