In PlaintextNames mode the "gocryptfs.longname." prefix does not have any
special meaning. We should not attempt to delete any .name files.
Partially fixes https://github.com/rfjakob/gocryptfs/issues/174
Chown'ing newly created symlinks to the requesting user is already done in
regular mode, but was missing when PlaintextNames mode is enabled. As a
result, symlinks created by non-root users were still owned by root
afterwards.
Fixes https://github.com/rfjakob/gocryptfs/issues/176
In PlaintextNames mode the "gocryptfs.longname." prefix does not have any
special meaning. We should not attempt to read the directory IV or to
create special .name files.
Partially fixes https://github.com/rfjakob/gocryptfs/issues/174
If the user manages to replace the directory with
a symlink at just the right time, we could be tricked
into chown'ing the wrong file.
This change fixes the race by using fchownat, which
unfortunately is not available on darwin, hence a compat
wrapper is added.
Scenario, as described by @slackner at
https://github.com/rfjakob/gocryptfs/issues/177 :
1. Create a forward mount point with `plaintextnames` enabled
2. Mount as root user with `allow_other`
3. For testing purposes create a file `/tmp/file_owned_by_root`
which is owned by the root user
4. As a regular user run inside of the GoCryptFS mount:
```
mkdir tempdir
mknod tempdir/file_owned_by_root p &
mv tempdir tempdir2
ln -s /tmp tempdir
```
When the steps are done fast enough and in the right order
(run in a loop!), the device file will be created in
`tempdir`, but the `lchown` will be executed by following
the symlink. As a result, the ownership of the file located
at `/tmp/file_owned_by_root` will be changed.
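A minimal sketch of the race-free pattern, assuming we already hold dirfd
(a file descriptor for the parent ciphertext directory) and the caller's
uid/gid; the function name is illustrative, not the repo's actual code:
```
// Sketch only (Linux; darwin needs the compat wrapper mentioned above):
// chown the new node relative to the parent directory fd and never follow
// symlinks, so a directory swapped for a symlink in between cannot redirect
// the chown to an arbitrary path like /tmp.
func chownNewNode(dirfd int, name string, uid, gid int) error {
	return syscall.Fchownat(dirfd, name, uid, gid, syscall.AT_SYMLINK_NOFOLLOW)
}
```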
If the symlink target gets too long due to base64 encoding, we should
return ENAMETOOLONG instead of having the kernel reject the data and
return an I/O error to the user.
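A hedged sketch of the length check; the limit constant and function name
are assumptions for illustration:
```
// pathMax mirrors the Linux PATH_MAX limit of 4096 bytes (assumed here).
const pathMax = 4096

// checkSymlinkTarget rejects an encrypted symlink target that has grown too
// long through base64 encoding, so the user sees ENAMETOOLONG instead of EIO.
func checkSymlinkTarget(cTarget string) error {
	if len(cTarget) > pathMax {
		return syscall.ENAMETOOLONG
	}
	return nil
}
```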
Fixes https://github.com/rfjakob/gocryptfs/issues/167
Fixes https://github.com/rfjakob/gocryptfs/issues/168
Steps to reproduce the problem:
* Create a regular reverse mount point
* Create files with the same very long name in multiple directories - so far
everything works as expected, and each file appears under a different
encrypted name, for example gocryptfs.longname.A in directory A and
gocryptfs.longname.B in directory B
* Try to access a path with A/gocryptfs.longname.B or B/gocryptfs.longname.A -
this should fail, but it actually works.
The problem is that the longname cache only uses the path as key and not the
directory or its dirIV. If an attacker can directly interact with a reverse
mount and knows the relation longname path -> unencoded path in one directory,
this allows them to test whether the same unencoded filename appears in any
other directory.
Fixes https://github.com/rfjakob/gocryptfs/issues/171
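Illustrative sketch of the direction of the fix (types and names are
assumptions, not the repo's actual cache): make the directory part of the
cache key so identical long names in different directories stay distinct.
```
// longnameCacheKey ties a cache entry to its directory via the dirIV, so
// A/gocryptfs.longname.B can no longer resolve through B's cache entry.
type longnameCacheKey struct {
	dirIV string // the directory's IV, as a string so the struct is a valid map key
	cName string // the encrypted long name ("gocryptfs.longname....")
}

// longnameCache maps (dirIV, encrypted name) to the plaintext name.
var longnameCache = make(map[longnameCacheKey]string)
```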
Steps to reproduce:
* Create a regular forward mount point
* Create a new directory in the mount point
* Manually delete the gocryptfs.diriv file from the corresponding ciphertext
directory
* Attempt to delete the directory with 'rmdir <dirname>'
Although the code explicitly checks that the directory is empty, it still
attempts to move the non-existent gocryptfs.diriv file and fails with:
rmdir: failed to remove '<dirname>': No such file or directory
Fixes https://github.com/rfjakob/gocryptfs/issues/170
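A sketch of the tolerant behavior, assuming the usual approach of moving
gocryptfs.diriv out of the way before calling rmdir (names are placeholders):
```
// moveDirIV moves gocryptfs.diriv out of the directory before rmdir. A
// missing diriv file is treated as "already gone" instead of an error.
func moveDirIV(dirivPath, tmpPath string) error {
	err := syscall.Rename(dirivPath, tmpPath)
	if err != nil && err != syscall.ENOENT {
		return err
	}
	return nil
}
```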
Steps to reproduce the problem:
* Create a regular forward mount point
* Create a file with a shortname and one with a long filename
* Try to run 'mv <shortname> <longname>'
This should actually work and replace the existing file, but instead it
fails with:
mv: cannot move '<shortname>' to '<longname>': File exists
The problem is the creation of the .name file. If the target already exists,
we can safely ignore the EEXIST error and just keep the existing .name file.
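A hedged sketch of the idea (function name and permissions are assumptions;
the essential part is treating EEXIST as success):
```
// writeNameFile creates the .name file with O_EXCL. If the rename target
// already has one, the EEXIST is harmless: the existing .name file holds
// the same content, because the hashed long name is the same.
func writeNameFile(path string, content []byte) error {
	fd, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
	if err != nil {
		if os.IsExist(err) {
			return nil // keep the existing .name file
		}
		return err
	}
	defer fd.Close()
	_, err = fd.Write(content)
	return err
}
```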
Allows using /dev/random for generating the master key instead of the
default Go implementation. When the kernel random generator has been
properly initialized, both are considered equally secure; however:
* Versions of Go prior to 1.9 just fall back to /dev/urandom if the
getrandom() syscall would be blocking (Go Bug #19274)
* Kernel versions prior to 3.17 do not support getrandom(), and there
is no check whether the random generator has been properly initialized
before reading from /dev/urandom
This is especially useful for embedded hardware with little entropy. Please
note that generating the master key may block indefinitely if the kernel
cannot harvest enough entropy.
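A minimal sketch of reading the master key from /dev/random (not the repo's
actual implementation; error handling trimmed to the essentials):
```
// randomKeyFromDevRandom reads keyLen bytes from /dev/random. The read
// blocks until the kernel considers its random pool ready.
func randomKeyFromDevRandom(keyLen int) ([]byte, error) {
	f, err := os.Open("/dev/random")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	key := make([]byte, keyLen)
	if _, err := io.ReadFull(f, key); err != nil {
		return nil, err
	}
	return key, nil
}
```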
...to account for unaligned reads.
I have not seen this happen in the wild because the kernel
always seems to issue 4k-aligned requests. But the cost
of the additional block in the pool is low and prevents
a buffer overrun panic when an unaligned read does happen.
Our byte cache pools are sized according to MAX_KERNEL_WRITE, but the
running kernel may have a higher limit set. Clamp to what we can
handle.
Fixes a panic on a Synology NAS reported at
https://github.com/rfjakob/gocryptfs/issues/145
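The clamping itself is small; a hedged sketch, assuming go-fuse's
fuse.MAX_KERNEL_WRITE constant (which is what the pools are sized for):
```
// clampWrite limits an incoming write to what our MAX_KERNEL_WRITE-sized
// buffer pools can handle; the caller reports the shorter length back.
func clampWrite(data []byte) []byte {
	if len(data) > fuse.MAX_KERNEL_WRITE {
		return data[:fuse.MAX_KERNEL_WRITE]
	}
	return data
}
```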
A file with a name exactly 176 bytes long caused this error:
ls: cannot access ./tmp/dsg/sXSGJLTuZuW1FarwIkJs0w/b6mGjdxIRpaeanTo0rbh0A/QjMRrQZC_4WLhmHI1UOBcA/gocryptfs.longname.QV-UipdDXeUVdl05WruoEzBNPrQCfpu6OzJL0_QnDKY: No such file or directory
ls: cannot access ./tmp/dsg/sXSGJLTuZuW1FarwIkJs0w/b6mGjdxIRpaeanTo0rbh0A/QjMRrQZC_4WLhmHI1UOBcA/gocryptfs.longname.QV-UipdDXeUVdl05WruoEzBNPrQCfpu6OzJL0_QnDKY.name: No such file or directory
-????????? ? ? ? ? ? gocryptfs.longname.QV-UipdDXeUVdl05WruoEzBNPrQCfpu6OzJL0_QnDKY
-????????? ? ? ? ? ? gocryptfs.longname.QV-UipdDXeUVdl05WruoEzBNPrQCfpu6OzJL0_QnDKY.name
The root cause was a wrong shortNameMax constant that failed to
account for the obligatory padding byte.
Fix the constant and also expand the TestLongnameStat test case
to cover ALL file name lengths from 1 to 255 bytes.
Fixes https://github.com/rfjakob/gocryptfs/issues/143 .
The encrypt and decrypt paths both had a copy that was equivalent
but ordered differently, which was confusing.
Consolidate it in a new dedicated function.
MacOS creates lots of these files, and if the directory is otherwise
empty, we would throw an I/O error at the unsuspecting user.
With this patch, we log a warning but otherwise pretend we did not
see it.
Mitigates https://github.com/rfjakob/gocryptfs/issues/140
...and if Getdents is not available at all.
Due to this warning I now know that SSHFS always returns DT_UNKNOWN:
gocryptfs[8129]: Getdents: convertDType: received DT_UNKNOWN, falling back to Lstat
This behavior is confirmed at http://ahefner.livejournal.com/16875.html:
"With sshfs, I finally found that obscure case. The dtype is always set to DT_UNKNOWN [...]"
The benchmark that supported the decision for 512-byte
prefetching previously lived outside the repo.
Let's add it where it belongs so it cannot get lost.
The Readdir function provided by os is inherently slow because
it calls Lstat on all files.
Getdents gives us all the information we need, but does not have
a proper wrapper in the stdlib.
Implement the "Getdents()" wrapper function that calls
syscall.Getdents() and parses the returned byte blob to a
fuse.DirEntry slice.
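A rough sketch of the syscall loop. It is simplified: the real wrapper also
decodes d_ino and d_type from the raw linux_dirent64 records to build the
fuse.DirEntry slice, while this sketch leans on syscall.ParseDirent and only
collects names.
```
// getdentsNames collects the entry names of the open directory fd by calling
// getdents(2) repeatedly until it returns 0.
func getdentsNames(fd int) ([]string, error) {
	buf := make([]byte, 8192)
	var names []string
	for {
		n, err := syscall.Getdents(fd, buf)
		if err != nil {
			return nil, err
		}
		if n == 0 {
			return names, nil
		}
		// ParseDirent skips "." and ".." for us.
		_, _, names = syscall.ParseDirent(buf[:n], -1, names)
	}
}
```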
Remove the "Masterkey" field from fusefrontend.Args because it
should not be stored longer than necessary. Instead, pass the
masterkey as a separate argument to the filesystem initializers.
Then overwrite it with zeros immediately so we don't have
to wait for garbage collection.
Note that the crypto implementation still stores at least a
masterkey-derived value, so this change makes it harder, but not
impossible, to extract the encryption keys from memory.
Suggested at https://github.com/rfjakob/gocryptfs/issues/137
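The zeroing step itself is simple; a sketch, assuming the initializers have
already derived everything they need from the key:
```
// wipeKey overwrites the masterkey with zeros so the plaintext key does not
// linger in memory until the garbage collector gets around to it.
func wipeKey(masterkey []byte) {
	for i := range masterkey {
		masterkey[i] = 0
	}
}
```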
* extend the diriv cache to 100 entries
* add special handling for the immutable root diriv
The better cache allows us to shed some complexity from the path
encryption logic (the parent-of-parent check).
Mitigates https://github.com/rfjakob/gocryptfs/issues/127
Dir is like filepath.Dir but returns "" instead of ".".
This was already implemented in fusefrontend_reverse as saneDir().
We will need it in nametransform for the improved diriv caching.
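A sketch of the helper as described:
```
// Dir behaves like filepath.Dir, but returns "" instead of ".".
func Dir(path string) string {
	d := filepath.Dir(path)
	if d == "." {
		return ""
	}
	return d
}
```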
A directory with a long name has two associated virtual files:
the .name file and the .diriv file.
These used to get the same inode number:
$ ls -di1 * */*
33313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw
1000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw/gocryptfs.diriv
1000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw.name
With this change we use another prefix (2 instead of 1) for .name files.
$ ls -di1 * */*
33313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw
1000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw/gocryptfs.diriv
2000000000033313535 gocryptfs.longname.2togDFouca9mrTwtfF1RNW5DZRAQY8alaR7wO_Xd5Zw.name
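Illustrative sketch of the numbering scheme visible in the listings above
(constant and function names are assumptions): the virtual files derive
their inode numbers from the backing inode plus a distinct prefix.
```
const (
	inoBaseDirIV = 1000000000000000000 // 1e18 -> leading "1" for .diriv
	inoBaseName  = 2000000000000000000 // 2e18 -> leading "2" for .name
)

// virtualIno maps a backing inode number to the inode of one of its two
// associated virtual files, keeping the two distinct from each other.
func virtualIno(backingIno uint64, isNameFile bool) uint64 {
	if isNameFile {
		return inoBaseName + backingIno
	}
	return inoBaseDirIV + backingIno
}
```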
On MacOS, building and testing without openssl is much easier.
The test suite should skip tests that fail because of missing openssl
instead of aborting.
Fixes https://github.com/rfjakob/gocryptfs/issues/123
Fixed by including the correct header. Should work on older openssl
versions as well.
Error was:
locking.go:21: undefined reference to `CRYPTO_set_locking_callback'
Due to RMW, we always need read permissions on the backing file. This is a
problem if the file permissions do not allow reading (e.g. 0200 permissions).
This patch works around that problem by chmod'ing the file, obtaining a fd,
and chmod'ing it back.
Test included.
Issue reported at: https://github.com/rfjakob/gocryptfs/issues/125
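A hedged sketch of the workaround (simplified: no locking, and only the
EACCES path is shown; names are illustrative):
```
// openRMW opens the backing file read-write. If the permissions forbid it
// (e.g. 0200), temporarily add the owner-read bit, open the file, and
// restore the original mode afterwards.
func openRMW(path string) (int, error) {
	fd, err := syscall.Open(path, syscall.O_RDWR, 0)
	if err != syscall.EACCES {
		return fd, err
	}
	var st syscall.Stat_t
	if err2 := syscall.Stat(path, &st); err2 != nil {
		return -1, err
	}
	perms := st.Mode & 0777
	if err2 := syscall.Chmod(path, perms|0400); err2 != nil {
		return -1, err
	}
	fd, err = syscall.Open(path, syscall.O_RDWR, 0)
	syscall.Chmod(path, perms) // restore the original permissions
	return fd, err
}
```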
Previously we ran through the decryption steps even for an empty
ciphertext slice. The functions handle it correctly, but returning
early skips all the extra calls.
Speeds up the tar extract benchmark by about 4%.
We use two levels of buffers:
1) 4kiB+overhead for each ciphertext block
2) 128kiB+overhead for each FUSE write (32 ciphertext blocks)
This commit adds a sync.Pool for both levels.
The memory-efficiency for small writes could be improved,
as we now always use a 128kiB buffer.
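A minimal sketch of one pool level (sizes and names are illustrative; the
real code has one pool per level):
```
// blockPool recycles ciphertext-block-sized buffers so that every 4kiB
// block does not cause a fresh allocation and extra GC work.
var blockPool = sync.Pool{
	New: func() interface{} {
		return make([]byte, 4096+32) // plaintext block + per-block overhead (size assumed)
	},
}

func getBlockBuf() []byte  { return blockPool.Get().([]byte) }
func putBlockBuf(b []byte) { blockPool.Put(b) }
```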
128kiB = 32 x 4kiB pages is the maximum we get from the kernel. Splitting
up smaller writes is probably not worth it.
Parallelism is limited to two for now.
Spawn a worker goroutine that reads the next 512-byte block
while the current one is being drained.
This should help reduce waiting times when /dev/urandom is very
slow (like on Linux 3.16 kernels).
On my machine, reading 512-byte blocks from /dev/urandom
(the same applies when using the getentropy syscall) is a lot faster
in terms of throughput:
Blocksize Throughput
16 28.18 MB/s
512 83.75 MB/s
For a single-threaded streaming write, this drops the CPU usage of
nonceGenerator.Get to almost 1/3:
flat flat% sum% cum cum%
Before 0 0% 95.08% 0.35s 2.92% github.com/rfjakob/gocryptfs/internal/cryptocore.(*nonceGenerator).Get
After 0.01s 0.092% 92.34% 0.13s 1.20% github.com/rfjakob/gocryptfs/internal/cryptocore.(*nonceGenerator).Get
This change makes the nonce reading single-threaded, which may
hurt massively-parallel writes.
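A simplified sketch of the prefetch pattern (type and method names are
assumptions): one worker goroutine keeps the next 512-byte block of
randomness ready while the consumer drains the current one.
```
type randPrefetcher struct {
	buf    bytes.Buffer
	refill chan []byte
}

func newRandPrefetcher() *randPrefetcher {
	r := &randPrefetcher{refill: make(chan []byte)}
	go func() {
		for {
			b := make([]byte, 512)
			if _, err := rand.Read(b); err != nil { // crypto/rand
				log.Panicf("randPrefetcher: %v", err)
			}
			r.refill <- b // blocks until the consumer wants more
		}
	}()
	return r
}

// read must only be called from one goroutine at a time - this is the
// single-threaded aspect mentioned above.
func (r *randPrefetcher) read(n int) []byte {
	for r.buf.Len() < n {
		r.buf.Write(<-r.refill)
	}
	out := make([]byte, n)
	r.buf.Read(out)
	return out
}
```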
This check would need locking to be multithreading-safe.
But as it is in the fast path, just remove it.
rand.Read() already guarantees that the value is random.
Travis failed on Go 1.6.3 with this error:
internal/pathiv/pathiv_test.go:20: no args in Error call
This change should solve the problem and provide a better error
message on (real) test failures.
With hard links, the path to a file is not unique. This means
that the ciphertext data depends on the path that is used to access
the file.
Fix that by storing the derived values when we encounter a hard-linked
file. This means that the first path wins.
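Illustrative sketch of the idea (types and names are assumptions; locking
is omitted): once a file has more than one link, pin the derived value
under its device/inode pair so every path yields the same ciphertext.
```
type devIno struct {
	dev, ino uint64
}

// hardlinkIVs pins the first derived IV we saw for a multi-link file.
var hardlinkIVs = make(map[devIno][]byte)

func ivForFile(st *syscall.Stat_t, derivedIV []byte) []byte {
	if st.Nlink <= 1 {
		return derivedIV // not hard-linked: the path-derived IV is stable
	}
	key := devIno{dev: uint64(st.Dev), ino: st.Ino}
	if iv, ok := hardlinkIVs[key]; ok {
		return iv // the first path wins
	}
	hardlinkIVs[key] = derivedIV
	return derivedIV
}
```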
This fixes a few issues I have found reviewing the code:
1) Limit the amount of data ReadLongName() will read. Previously,
you could send gocryptfs into out-of-memory by symlinking
gocryptfs.diriv to /dev/zero.
2) Handle the empty input case in unPad16() by returning an
error (see the sketch after this list). Previously, it would panic
with an out-of-bounds array read. It is unclear to me whether this
could actually be triggered.
3) Reject empty names after base64-decoding in DecryptName().
An empty name crashes emeCipher.Decrypt().
It is unclear to me if B64.DecodeString() can actually return
a non-error empty result, but let's guard against it anyway.
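Minimal sketch of the guard from point 2, assuming PKCS#7-style padding to
16-byte blocks; the function name is from the text above, the body is
simplified:
```
// unPad16 removes the padding that brings names up to a multiple of 16
// bytes. Empty or inconsistent input returns an error instead of panicking
// with an out-of-bounds read.
func unPad16(padded []byte) ([]byte, error) {
	if len(padded) == 0 {
		return nil, fmt.Errorf("unPad16: empty input")
	}
	padLen := int(padded[len(padded)-1])
	if padLen == 0 || padLen > 16 || padLen > len(padded) {
		return nil, fmt.Errorf("unPad16: invalid padding length %d", padLen)
	}
	return padded[:len(padded)-padLen], nil
}
```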
When a user calls into a deep directory hierarchy, we often
get a sequence like this from the kernel:
LOOKUP a
LOOKUP a/b
LOOKUP a/b/c
LOOKUP a/b/c/d
The diriv cache was not effective for this pattern, because it
was designed for this:
LOOKUP a/a
LOOKUP a/b
LOOKUP a/c
LOOKUP a/d
By also using the cached entry of the grandparent we can avoid lots
of diriv reads.
This benchmark is against a large encrypted directory hosted on NFS:
Before:
$ time ls -R nfs-backed-mount > /dev/null
real 1m35.976s
user 0m0.248s
sys 0m0.281s
After:
$ time ls -R nfs-backed-mount > /dev/null
real 1m3.670s
user 0m0.217s
sys 0m0.403s
This commit defines all exit codes in one place in the exitcodes
package.
Also, it adds a test to verify the exit code on incorrect
password, which is what SiriKali cares about the most.
Fixes https://github.com/rfjakob/gocryptfs/issues/77 .
Misspell Finds commonly misspelled English words
gocryptfs/internal/configfile/scrypt.go
Line 41: warning: "paramter" is a misspelling of "parameter" (misspell)
gocryptfs/internal/ctlsock/ctlsock_serve.go
Line 1: warning: "implementes" is a misspelling of "implements" (misspell)
gocryptfs/tests/test_helpers/helpers.go
Line 27: warning: "compatability" is a misspelling of "compatibility" (misspell)
This can happen during normal operation and is harmless: since
14038a1644 "fusefrontend: readFileID: reject files that consist only
of a header", dormant header-only files are rewritten on the next
write.
We do not have to track the writeOnly status because the kernel
will not forward read requests on a write-only FD to us anyway.
I have verified this behavior manually on a 4.10.8 kernel and also
added a testcase.
As reported at https://github.com/rfjakob/gocryptfs/issues/105 ,
the "ioutil.WriteFile(file, iv, 0400)" call causes "permissions denied"
errors on an NFSv4 setup.
"strace"ing diriv creation and gocryptfs.conf creation shows this:
conf (works on the user's NFSv4 mount):
openat(AT_FDCWD, "/tmp/a/gocryptfs.conf.tmp", O_WRONLY|O_CREAT|O_EXCL|O_CLOEXEC, 0400) = 3
diriv (fails):
openat(AT_FDCWD, "/tmp/a/gocryptfs.diriv", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0400) = 3
This patch creates the diriv file with the same flags that are used for
creating the conf:
openat(AT_FDCWD, "/tmp/a/gocryptfs.diriv", O_WRONLY|O_CREAT|O_EXCL|O_CLOEXEC, 0400) = 3
Closes https://github.com/rfjakob/gocryptfs/issues/105
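A sketch of the changed creation call (error handling trimmed; names are
placeholders):
```
// createDirIV writes gocryptfs.diriv with the same open flags that are used
// for gocryptfs.conf: O_EXCL instead of O_TRUNC.
func createDirIV(dirivPath string, iv []byte) error {
	fd, err := syscall.Open(dirivPath,
		syscall.O_WRONLY|syscall.O_CREAT|syscall.O_EXCL|syscall.O_CLOEXEC, 0400)
	if err != nil {
		return err
	}
	_, err = syscall.Write(fd, iv)
	syscall.Close(fd)
	return err
}
```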