Due to RMW, we always need read permissions on the backing file. This is a
problem if the file permissions do not allow reading (e.g. 0200 permissions).
This patch works around that problem by chmod'ing the file, obtaining a fd,
and chmod'ing it back.
Test included.
Issue reported at: https://github.com/rfjakob/gocryptfs/issues/125
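
A minimal sketch of the workaround, assuming we only attempt it for files we
own (this is not the actual gocryptfs code; names are illustrative and error
handling is trimmed):

package rmwopen

import (
	"os"
	"syscall"
)

// openRMW opens the backing file read-write. If that fails with EACCES and
// we own the file, it temporarily adds the owner-read bit, retries the open,
// and then restores the original permissions.
func openRMW(path string) (int, error) {
	fd, err := syscall.Open(path, syscall.O_RDWR, 0)
	if err != syscall.EACCES {
		return fd, err
	}
	var st syscall.Stat_t
	if err2 := syscall.Stat(path, &st); err2 != nil || st.Uid != uint32(os.Getuid()) {
		return -1, err // not our file: give up with the original error
	}
	perms := st.Mode & 0777
	syscall.Chmod(path, perms|0400) // temporarily grant owner-read
	fd, err = syscall.Open(path, syscall.O_RDWR, 0)
	syscall.Chmod(path, perms) // restore the original mode in any case
	return fd, err
}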
Previously we ran through the decryption steps even for an empty
ciphertext slice. The functions handle it correctly, but returning
early skips all the extra calls.
Speeds up the tar extract benchmark by about 4%.
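
Roughly what the early return looks like; the function name and signature are
illustrative, not the real contentenc API:

package decrypt

// decryptBlocks is an illustrative stand-in for the real decryption entry
// point: an empty ciphertext slice returns immediately instead of running
// through buffer setup and per-block AEAD calls.
func decryptBlocks(ciphertext []byte) ([]byte, error) {
	if len(ciphertext) == 0 {
		return nil, nil // empty file or read past EOF: nothing to do
	}
	// ... the usual per-block decryption path runs here ...
	panic("decryption path omitted in this sketch")
}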
We use two levels of buffers:
1) 4kiB+overhead for each ciphertext block
2) 128kiB+overhead for each FUSE write (32 ciphertext blocks)
This commit adds a sync.Pool for both levels.
The memory efficiency for small writes could be improved,
as we now always use a 128kiB buffer.
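
A sketch of the two-level pooling described above; the 32-byte per-block
overhead is an assumption (nonce plus auth tag), not a quote of the real
gocryptfs constants:

package bufpool

import "sync"

const (
	blockOverhead = 32                   // assumed: per-block nonce + auth tag
	blockBufSize  = 4096 + blockOverhead // level 1: one ciphertext block
	writeBufSize  = 32 * blockBufSize    // level 2: one 128kiB FUSE write (32 blocks)
)

var blockPool = sync.Pool{New: func() interface{} { return make([]byte, blockBufSize) }}
var writePool = sync.Pool{New: func() interface{} { return make([]byte, writeBufSize) }}

// getWriteBuf hands out a reusable 128kiB+overhead buffer; the caller must
// return it with putWriteBuf once the FUSE write has been encrypted.
func getWriteBuf() []byte  { return writePool.Get().([]byte) }
func putWriteBuf(b []byte) { writePool.Put(b) }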
This fixes a few issues I found while reviewing the code:
1) Limit the amount of data ReadLongName() will read. Previously,
you could send gocryptfs into out-of-memory by symlinking
gocryptfs.diriv to /dev/zero.
2) Handle the empty input case in unPad16() by returning an
error (a sketch follows this list). Previously, it would panic with
an out-of-bounds array read. It is unclear to me if this could
actually be triggered.
3) Reject empty names after base64-decoding in DecryptName().
An empty name crashes emeCipher.Decrypt().
It is unclear to me if B64.DecodeString() can actually return
a non-error empty result, but let's guard against it anyway.
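
A hedged sketch of the unPad16() fix from item 2; the real function lives in
the nametransform package and may differ in detail:

package nameenc

import "errors"

// unPad16 removes PKCS#7-style padding from a 16-byte-aligned buffer.
// The empty-input check is the new part: previously padded[oldLen-1]
// panicked on a zero-length slice.
func unPad16(padded []byte) ([]byte, error) {
	oldLen := len(padded)
	if oldLen == 0 {
		return nil, errors.New("unPad16: empty input")
	}
	if oldLen%16 != 0 {
		return nil, errors.New("unPad16: length is not a multiple of 16")
	}
	padLen := int(padded[oldLen-1])
	if padLen <= 0 || padLen > 16 {
		return nil, errors.New("unPad16: invalid padding length")
	}
	// Every padding byte must equal the padding length.
	for _, b := range padded[oldLen-padLen:] {
		if int(b) != padLen {
			return nil, errors.New("unPad16: inconsistent padding bytes")
		}
	}
	return padded[:oldLen-padLen], nil
}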
We do not have to track the writeOnly status because the kernel
will not forward read requests on a write-only FD to us anyway.
I have verified this behavior manually on a 4.10.8 kernel and also
added a testcase.
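
A sketch of a test along those lines; mountDir and the surrounding test
scaffolding are assumptions, not the real test code:

package writeonly

import (
	"path/filepath"
	"syscall"
	"testing"
)

var mountDir = "/tmp/gocryptfs.mnt" // assumed: plaintext mountpoint set up elsewhere

func TestWriteOnlyFD(t *testing.T) {
	path := filepath.Join(mountDir, "writeonly")
	fd, err := syscall.Open(path, syscall.O_WRONLY|syscall.O_CREAT, 0200)
	if err != nil {
		t.Fatal(err)
	}
	defer syscall.Close(fd)
	if _, err := syscall.Write(fd, []byte("foo")); err != nil {
		t.Error(err)
	}
	// The kernel answers this itself with EBADF; no READ request reaches the
	// FUSE daemon.
	if _, err := syscall.Read(fd, make([]byte, 3)); err != syscall.EBADF {
		t.Errorf("expected EBADF, got %v", err)
	}
}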
Force decode of encrypted files even if the integrity check fails, instead of
failing with an IO error. Warning messages are still printed to syslog if corrupted
files are encountered.
This can be useful for recovering files from disks with bad sectors or other
corrupted media.
Closes https://github.com/rfjakob/gocryptfs/pull/102 .
go-fuse has added a new method to the nodefs.File interface that
caused this build error:
internal/fusefrontend/file.go:75: cannot use file literal (type *file) as type nodefs.File in return argument:
*file does not implement nodefs.File (missing Flock method)
Fixes https://github.com/rfjakob/gocryptfs/issues/104 and
prevents the problem from happening again.
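
The "prevents the problem from happening again" part can be handled by
embedding go-fuse's default implementation, which supplies stubs (typically
ENOSYS) for the whole interface; a sketch with illustrative field names:

package fusefrontend

import (
	"os"

	"github.com/hanwen/go-fuse/fuse/nodefs"
)

type file struct {
	nodefs.File          // stubs for every interface method, including future ones like Flock
	fd          *os.File // the encrypted backing file
}

func newFile(fd *os.File) *file {
	return &file{
		File: nodefs.NewDefaultFile(),
		fd:   fd,
	}
}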
The volatile inode numbers that we used before caused "find" to complain and error out.
Virtual inode numbers are derived from the parent file's inode number by adding 10^18,
which is hopefully large enough to never cause problems in practice.
If the backing directory contains inode numbers higher than that, stat() on these files
will return EOVERFLOW.
Example directory listing after this change:
$ ls -i
926473 gocryptfs.conf
1000000000000926466 gocryptfs.diriv
944878 gocryptfs.longname.hmZojMqC6ns47eyVxLlH2ailKjN9bxfosi3C-FR8mjA
1000000000000944878 gocryptfs.longname.hmZojMqC6ns47eyVxLlH2ailKjN9bxfosi3C-FR8mjA.name
934408 Tdfbf02CKsTaGVYnAsSypA
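
A hedged sketch of the derivation; the constant and names are illustrative,
with the offset matching the listing above:

package inomap

import "syscall"

const inoBase = uint64(1000000000000000000) // 10^18

// virtualIno maps a real inode number to the inode number of the associated
// virtual file (gocryptfs.diriv, *.name).
func virtualIno(parentIno uint64) (uint64, error) {
	if parentIno >= inoBase {
		// The backing fs already uses this range; stat() must fail cleanly.
		return 0, syscall.EOVERFLOW
	}
	return parentIno + inoBase, nil
}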
Due to kernel readahead, we usually get multiple read requests
at the same time. These get submitted to the backing storage in
random order, which is a problem if seeking is very expensive.
Details: https://github.com/rfjakob/gocryptfs/issues/92
...if doWrite() can do it for us. This avoids the situation
where the file consists of only a file header at the time doWrite()
is called.
A later patch will check for this condition and warn about it,
as with this change it should no longer occur in normal operation.
...but keep it disabled by default for new filesystems.
We are still missing an example filesystem and CLI arguments
to explicitly enable and disable it.
There are two independent backends: one for name encryption,
the other (AEAD) for file content.
"BackendTypeEnum" only applies to AEAD (file content), so make that
clear in the name.
Version 1.1 of the EME package (github.com/rfjakob/eme) added
a more convenient interface. Use it.
Note that you have to upgrade your EME package (go get -u)!
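
A hedged sketch of the v1.1 convenience API as referenced elsewhere in this
log (emeCipher.Decrypt()); the exact constructor and method signatures may
differ from the real github.com/rfjakob/eme package:

package namecrypt

import (
	"crypto/aes"

	"github.com/rfjakob/eme"
)

// decryptPaddedName undoes EME-wide-block encryption of a padded filename.
func decryptPaddedName(emeKey, iv, ciphertext []byte) []byte {
	bc, err := aes.NewCipher(emeKey) // 16-, 24- or 32-byte AES key
	if err != nil {
		panic(err)
	}
	emeCipher := eme.New(bc) // v1.1: wrap the block cipher once, then reuse it
	return emeCipher.Decrypt(iv, ciphertext)
}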
Preallocation is very slow on HDDs running btrfs. Give the
user the option to disable it. This greatly speeds up small file
operations but reduces the robustness against out-of-space errors.
Also add the option to the man page.
More info: https://github.com/rfjakob/gocryptfs/issues/63
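
A sketch of the opt-out; the variable name and wiring to the command line are
assumptions:

package fusefrontend

import "golang.org/x/sys/unix"

var noPrealloc bool // assumed: set from the new command-line option

// prealloc reserves space for a pending write, unless the user disabled
// preallocation, trading out-of-space robustness for speed.
func prealloc(fd int, off, length int64) error {
	if noPrealloc {
		return nil
	}
	// FALLOC_FL_KEEP_SIZE reserves the space without changing the file size.
	return unix.Fallocate(fd, unix.FALLOC_FL_KEEP_SIZE, off, length)
}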
Stat() calls are expensive on NFS as they need a full network
round-trip. We detect when a write immediately follows the
last one and skip the Stat in this case because the write
cannot create a file hole.
On my (slow) NAS, this takes the write speed from 24MB/s to
41MB/s.
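
A hedged sketch of the append detection; field and method names are
illustrative:

package fusefrontend

import "os"

type file struct {
	fd           *os.File
	lastWriteEnd int64 // plaintext offset just past the previous write
}

// mustCheckSize reports whether a write starting at off could create a file
// hole. A write that starts exactly where the last one ended cannot, so the
// expensive Stat() round-trip can be skipped.
func (f *file) mustCheckSize(off int64) bool {
	return off != f.lastWriteEnd
}

func (f *file) write(data []byte, off int64) {
	if f.mustCheckSize(off) {
		// Stat() the backing file and zero-pad up to off if it is past EOF ...
	}
	// ... encrypt the blocks and write them out ...
	f.lastWriteEnd = off + int64(len(data))
}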