Glitch::KeywordMute's name is inferred as glitch_keyword_mutes, and in
templates this turns into e.g. settings/glitch/keyword_mutes. Going
along with this convention means a lot of file movement, though, and for
a UI that's as temporary and awkward as this one I think it's less
effort to slap a bunch of as: options everywhere.
We'll do the Right Thing when we build out the API and frontend UI.
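For reference, the namespaced model behind that inference might look
something like this (a minimal sketch; the table-prefix module is an
assumption, not the exact project layout):

# app/models/glitch.rb
module Glitch
  def self.table_name_prefix
    'glitch_'
  end
end

# app/models/glitch/keyword_mute.rb
class Glitch::KeywordMute < ApplicationRecord
  # Rails derives glitch_keyword_mute(s) for param keys and route helpers
  # from this class name, and (with the prefix above) the table name too.
  belongs_to :account, required: true
end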
Also make the keyword-building methods private: they probably always
should have been private, but by now I have encoded enough fun and games
into them that it seems wrong for them *not* to be private.
It is possible to cache a Regexp object, but I'm not sure what happens
if e.g. that object remains in cache across two different Ruby versions.
Caching a string seems to raise fewer questions.
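For instance (an illustrative irb check, not project code), the cache
can hold the regex source string and the Regexp can be rebuilt on read:

source = /hot\ take|palmetto/i.source     # => "hot\\ take|palmetto"; flags are not part of the source
Regexp.new(source, Regexp::IGNORECASE)    # => /hot\ take|palmetto/i, rebuilt from the cached string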
Only add a leading \b if the keyword starts with a word character; ditto
for ending with \b.
Consider muting the phrase "(hot take)". I stipulate it is reasonable
to enter this with the default "match whole word" behavior. Under the
old behavior, this would be encoded as
    \b\(hot\ take\)\b
However, if \b is before the first character in the string and the first
character in the string is not a word character, then the match will
fail. Ditto for after. In our example, "(" is not a word character, so
this will not match statuses containing "(hot take)", and that's a very
surprising behavior.
To address this, we only add leading and trailing \b to keywords that
start or end with word characters.
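A quick irb check (illustrative, not project code) shows the difference:

/\b\(hot\ take\)\b/.match?('a (hot take) b')  # => false; no word char adjoins "(" or ")", so \b fails
/\(hot\ take\)/.match?('a (hot take) b')      # => true once the boundaries are dropped

# Keywords with word-character edges keep the whole-word behavior:
/\btake\b/.match?('hot takes ahead')          # => false
/\btake\b/.match?('one hot take, please')     # => true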
There are two motivations for this:
1. It looks like we're going to add other features that require
server-side storage (e.g. user notes).
2. Namespacing glitchsoc modifications is a good idea anyway: even if we
do not end up doing (1), if upstream introduces a keyword-mute feature
that also uses a "KeywordMute" model, we can avoid some merge
conflicts this way and work on the more interesting task of
choosing which implementation to use.
Word-boundary matching only works as intended in English and languages
that use similar word-breaking characters; it doesn't work so well in
(say) Japanese, Chinese, or Thai. It's unacceptable to have a feature
that doesn't work as intended for some languages. (Especially
considering that the largest contingent on the Mastodon bit of the
fediverse likely speaks Japanese.)
There are rules specified in Unicode TR29[1] for word-breaking across
all languages supported by Unicode, but the rules deliberately do not
cover all cases. In fact, TR29 states
    For example, reliable detection of word boundaries in languages such
    as Thai, Lao, Chinese, or Japanese requires the use of dictionary
    lookup, analogous to English hyphenation.
So we aren't going to be able to make word detection work with regexes
within Mastodon (or glitchsoc). However, for a first pass (even if it's
kind of punting) we can allow the user to choose whether they want word
or substring detection and warn about the limitations of this
implementation in, say, docs.
[1]: https://unicode.org/reports/tr29/ (archived at
https://web.archive.org/web/20171001005125/https://unicode.org/reports/tr29/)
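As a sketch of how that per-keyword choice could combine with the
boundary rule above (the whole_word flag and helper names here are
assumptions, not the actual schema or API):

# Wrap in \b only where a boundary can actually match (see the
# "(hot take)" example above).
def boundary_wrapped(escaped)
  leading  = escaped.match?(/\A[[:word:]]/) ? '\b' : ''
  trailing = escaped.match?(/[[:word:]]\z/) ? '\b' : ''
  "#{leading}#{escaped}#{trailing}"
end

# whole_word: true  -> word matching (with the CJK/Thai caveat above)
# whole_word: false -> plain substring matching
def keyword_pattern(keyword, whole_word:)
  escaped = Regexp.escape(keyword.strip)
  whole_word ? boundary_wrapped(escaped) : escaped
end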
This should eventually be accessible via the API and the web frontend,
but I find it easier to set up an editing interface using Rails
templates and the like. We can always take it out if it turns out we
don't need it.
The intent of the previous concatenation was to minimize object
allocations, which can end up being a slow killer. However, it turns
out that under MRI 2.4.x, the shove-strings-in-an-array-and-join method
is not only arguably more common but (in this particular case) also
allocates *fewer* objects than the string concatenation.
Or, at least, that's what I gather by running this:
words = %w(palmettoes nudged hibernation bullish stockade's tightened Hades
           Dixie's formalize superego's commissaries Zappa's viceroy's apothecaries
           tablespoonful's barons Chennai tollgate ticked expands)

a = Account.first

KeywordMute.transaction do
  words.each { |w| KeywordMute.create!(keyword: w, account: a) }

  GC.start
  s1 = GC.stat

  re = String.new.tap do |str|
    scoped = KeywordMute.where(account: a)
    keywords = scoped.select(:id, :keyword)
    count = scoped.count

    keywords.find_each.with_index do |kw, index|
      str << Regexp.escape(kw.keyword.strip)
      str << '|' if index < count - 1
    end
  end

  s2 = GC.stat
  puts s1.inspect, s2.inspect

  raise ActiveRecord::Rollback
end
vs this:
words = %w(
  palmettoes nudged hibernation bullish stockade's tightened Hades Dixie's
  formalize superego's commissaries Zappa's viceroy's apothecaries tablespoonful's
  barons Chennai tollgate ticked expands
)

a = Account.first

KeywordMute.transaction do
  words.each { |w| KeywordMute.create!(keyword: w, account: a) }

  GC.start
  s1 = GC.stat

  re = [].tap do |arr|
    KeywordMute.where(account: a).select(:keyword, :id).find_each do |m|
      arr << Regexp.escape(m.keyword.strip)
    end
  end.join('|')

  s2 = GC.stat
  puts s1.inspect, s2.inspect

  raise ActiveRecord::Rollback
end
Using rails r, here is a comparison of the total_allocated_objects and
malloc_increase_bytes GC stat data:
                  total_allocated_objects       malloc_increase_bytes
  string concat   3200241 -> 3201428 (+1187)    1176 -> 45216 (44040)
  array join      3200380 -> 3201299 (+919)     1176 -> 36448 (35272)
It would also have been valid to get rid of the attr_reader, but I like
being able to reach inside KeywordMute::Matcher without resorting to
instance_variable_get tomfoolery.
A matcher object that builds a regex from KeywordMute data and runs it
over text is, in my view, one of the easier ways to write examples for
this sort of thing.
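A hedged sketch of that shape (class layout and method names are
assumptions, not the actual glitch-soc code):

class KeywordMute
  class Matcher
    attr_reader :regex  # exposed so examples can inspect the built pattern

    # keywords: an array of keyword strings for one account
    def initialize(keywords)
      escaped = keywords.map { |keyword| Regexp.escape(keyword.strip) }
      @regex = /#{escaped.join('|')}/i unless escaped.empty?
    end

    def matches?(text)
      !@regex.nil? && @regex.match?(text)
    end
  end
end

An example can then build a Matcher from a couple of keywords and call
matches? on sample text, and thanks to the attr_reader a spec can assert
on matcher.regex directly instead of reaching for instance_variable_get.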
Gist of the proposed keyword mute implementation:
Keyword mutes are represented server-side as one keyword per record.
For each account, there exists a keyword regex that is generated as one
big alternation of all keywords. This regex is cached (in Redis, I
guess) so we can quickly get it when filtering in FeedManager.
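A hedged sketch of that flow; the cache key, Rails.cache standing in for
Redis, and the FeedManager call site are all assumptions rather than the
actual implementation:

def keyword_regex_source_for(account_id)
  Rails.cache.fetch("keyword_mutes:#{account_id}") do
    KeywordMute.where(account_id: account_id).pluck(:keyword)
               .map { |keyword| Regexp.escape(keyword.strip) }.join('|')
  end
end

# FeedManager-style use (illustrative):
def keyword_muted?(receiver_id, text)
  source = keyword_regex_source_for(receiver_id)
  return false if source.empty?

  Regexp.new(source, Regexp::IGNORECASE).match?(text)
end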