Retry with length 1000 if length 4000 fails; a length of 1000
should work on all filesystems.
Failure was:
--- FAIL: TestTooLongSymlink (0.00s)
correctness_test.go:198: symlink xxx[...]xxxx /tmp/xfs.mnt/gocryptfs-test-parent/549823072/365091391/TooLongSymlink: file name too long
https://github.com/rfjakob/gocryptfs/issues/267
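As a hedged sketch (path and variable names are hypothetical), the fallback in the test could look like this:

target := strings.Repeat("x", 4000)
err := os.Symlink(target, path)
if err != nil {
    // Some filesystems (the failure above happened on XFS) reject very
    // long symlink targets, so fall back to a 1000-byte target.
    target = strings.Repeat("x", 1000)
    err = os.Symlink(target, path)
}
if err != nil {
    t.Fatal(err)
}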
Error was:
# github.com/rfjakob/gocryptfs/internal/fusefrontend
internal/fusefrontend/fs.go:179: cannot use perms | 256 (type uint16) as type uint32 in argument to syscall.Fchmod
internal/fusefrontend/fs.go:185: cannot use perms (type uint16) as type uint32 in argument to syscall.Fchmod
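A minimal sketch of the fix, assuming perms is declared as uint16 as the compiler errors indicate (function and variable names are hypothetical):

func fchmodAddOwnerRead(fd int, perms uint16) error {
    // syscall.Fchmod takes a uint32 mode, so the uint16 value has to be
    // converted explicitly. 256 == 0400, the owner-read bit.
    return syscall.Fchmod(fd, uint32(perms)|0400)
}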
Instead of reporting the consequence:
matrix_test.go:906: modeHave 0664 != modeWant 0777
Report the error if chmod itself fails, and also include the old file mode:
matrix_test.go:901: chmod 000 -> 777 failed: bad file descriptor
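A hedged sketch of the improved check (path and variable names are hypothetical):

oldSt, err := os.Stat(path)
if err != nil {
    t.Fatal(err)
}
// Fail right at the chmod call and include the old mode, instead of only
// noticing later that the mode did not change.
if err := os.Chmod(path, 0777); err != nil {
    t.Errorf("chmod %#o -> 0777 failed: %v", oldSt.Mode().Perm(), err)
}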
The cipherdir path is used as the fsname, as displayed
in "df -T". Now, having a comma in fsname triggers a sanity check
in go-fuse, aborting the mount with:
/bin/fusermount: mount failed: Invalid argument
fuse.NewServer failed: fusermount exited with code 256
Sanitize fsname by replacing any commas with underscores.
https://github.com/rfjakob/gocryptfs/issues/262
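A minimal sketch of the sanitization, assuming cipherdir holds the backing directory path:

// go-fuse passes the fsname inside the comma-separated mount option
// string, so a literal comma breaks parsing. Underscores are harmless.
fsname := strings.Replace(cipherdir, ",", "_", -1)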
Rename openBackingPath to openBackingDir and use OpenDirNofollow
to be safe against symlink races. Note that openBackingDir is
not used in several important code paths like Create().
But it is used in Unlink, and the RM benchmark shows the performance impact
to be acceptable:
Before
$ ./benchmark.bash
Testing gocryptfs at /tmp/benchmark.bash.bYO: gocryptfs v1.6-12-g930c37e-dirty; go-fuse v20170619-49-gb11e293; 2018-09-08 go1.10.3
WRITE: 262144000 bytes (262 MB, 250 MiB) copied, 1.07979 s, 243 MB/s
READ: 262144000 bytes (262 MB, 250 MiB) copied, 0.882413 s, 297 MB/s
UNTAR: 16.703
MD5: 7.606
LS: 1.349
RM: 3.237
After
$ ./benchmark.bash
Testing gocryptfs at /tmp/benchmark.bash.jK3: gocryptfs v1.6-13-g84d6faf-dirty; go-fuse v20170619-49-gb11e293; 2018-09-08 go1.10.3
WRITE: 262144000 bytes (262 MB, 250 MiB) copied, 1.06261 s, 247 MB/s
READ: 262144000 bytes (262 MB, 250 MiB) copied, 0.947228 s, 277 MB/s
UNTAR: 17.197
MD5: 7.540
LS: 1.364
RM: 3.410
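A hedged sketch of what openBackingDir could look like (struct and field names are assumptions, filename encryption is ignored here), assuming OpenDirNofollow(baseDir, relDir) returns a directory file descriptor:

// openBackingDir walks to the parent directory of relPath in a
// symlink-race-safe way and returns its fd plus the final path component.
func (fs *FS) openBackingDir(relPath string) (dirfd int, cName string, err error) {
    dirfd, err = syscallcompat.OpenDirNofollow(fs.args.Cipherdir, filepath.Dir(relPath))
    if err != nil {
        return -1, "", err
    }
    return dirfd, filepath.Base(relPath), nil
}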
The function used to do two things:
1) Walk the directory tree in a manner safe from symlink attacks
2) Open the final component in the mode requested by the caller
This change drops (2), which was only used once, and lets the caller
handle it. This simplifies the function and makes it fit for reuse in
forward mode in openBackingPath(), and for using O_PATH on Linux.
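A hedged sketch of the walk-only variant, using golang.org/x/sys/unix and strings (an illustration of the idea, not the actual implementation):

// OpenDirNofollow walks relPath below baseDir one component at a time.
// O_NOFOLLOW ensures that no step can be diverted through a planted
// symlink; the caller opens the final component itself, and on Linux the
// walk could use O_PATH instead of O_RDONLY.
func OpenDirNofollow(baseDir string, relPath string) (fd int, err error) {
    fd, err = unix.Open(baseDir, unix.O_RDONLY|unix.O_DIRECTORY|unix.O_NOFOLLOW, 0)
    if err != nil {
        return -1, err
    }
    for _, name := range strings.Split(relPath, "/") {
        if name == "" {
            continue
        }
        fd2, err := unix.Openat(fd, name, unix.O_RDONLY|unix.O_DIRECTORY|unix.O_NOFOLLOW, 0)
        unix.Close(fd)
        if err != nil {
            return -1, err
        }
        fd = fd2
    }
    return fd, nil
}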
These were silently ignored until now (!) but
are rejected by Go 1.11 stdlib.
Drop the flags so the tests work again, until
we figure out a better solution.
https://github.com/golang/go/issues/20130
Show enable_trezor in the version string if we were compiled
with `-tags enable_trezor`. And hide the `-trezor` flag from
the help output if we were not.
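A minimal sketch of how a build tag can drive both behaviors (file and constant names are hypothetical):

// trezor_enabled.go, compiled only with `-tags enable_trezor`; a counterpart
// file carries `// +build !enable_trezor` and sets the constant to false.

// +build enable_trezor

package main

// haveTrezor lets version printing append "enable_trezor" and lets flag
// registration decide whether to define the -trezor flag at all.
const haveTrezor = true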
The exact ciphertext block number (4KiB granularity) is
already printed in the doRead message. Don't cause
confusion by printing the 128KiB-granularity offset as
well.
doRead 767: corrupt block #4: stupidgcm: message authentication failed
fsck: error reading file "pa/d7/d14/f10c" (inum 767): 5=input/output error
Trying to build with these versions now throws this error:
# golang.org/x/sys/unix
../../../golang.org/x/sys/unix/ioctl.go:18: undefined: runtime.KeepAlive
It looks like x/sys/unix has dropped support for older Go
versions.
As uncovered by xfstests generic/465, concurrent reads and writes
could lead to this error:
doRead 3015532: corrupt block #1039: stupidgcm: message authentication failed
because the read could pick up a block that has not yet been completely written -
write() is not atomic!
Now writes take ContentLock exclusively, while reads take it shared,
meaning that multiple reads can run in parallel with each other, but
not with a write.
This also simplifies the file header locking.
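A minimal sketch of the locking scheme with sync.RWMutex (struct and method shapes are simplified assumptions):

type File struct {
    // contentLock serializes writes against reads: writers hold it
    // exclusively, readers hold it shared.
    contentLock sync.RWMutex
}

func (f *File) doRead(buf []byte, off uint64) {
    f.contentLock.RLock() // many reads may run in parallel
    defer f.contentLock.RUnlock()
    // ... read and decrypt the requested ciphertext blocks ...
}

func (f *File) doWrite(data []byte, off uint64) {
    f.contentLock.Lock() // exclusive: no read can see a half-written block
    defer f.contentLock.Unlock()
    // ... encrypt and write the affected ciphertext blocks ...
}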
xfstests generic/083 fills the filesystem almost completely while
running fsstress in parallel. In fsck, errors like this would show up:
readFileID 2580: incomplete file, got 18 instead of 19 bytes
This could happen when writing the file header works, but writing
the actual data fails.
Now we kill the header again by truncating the file to zero.
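A hedged sketch of the recovery path (variable names are hypothetical):

// wroteHeader is true if this write created the file header just before
// writing the data blocks.
_, err := f.fd.WriteAt(ciphertext, blockOffset)
if err != nil && wroteHeader {
    // The data write failed (e.g. ENOSPC), so only the fresh header is in
    // the file. Truncate back to zero so fsck does not see an incomplete,
    // header-only file.
    f.fd.Truncate(0)
}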