Mirror of https://framagit.org/tykayn/mastodon.git (synced 2023-08-25 08:33:12 +02:00)

Commit history at c99054ecb2 (5 commits)

Eugen Rochko | 0717d9b3e6 | Set snowflake IDs for backdated statuses (#5260)

- Rename Mastodon::TimestampIds into Mastodon::Snowflake for clarity
- Skip for statuses coming from inbox, aka delivered in real-time
- Skip for statuses that claim to be from the future
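
For context, the snowflake layout (detailed in #4801 below) packs a millisecond-resolution timestamp into the ID's high bytes, so backdating amounts to deriving an ID from the status's `created_at`. A hedged sketch; `snowflake_id_at` and the plumbing around the skip rules are illustrative, not Mastodon's actual API:

~~~ruby
# Top 6 bytes: milliseconds since the epoch; bottom 2 bytes: sequence.
def snowflake_id_at(time, sequence = 0)
  ((time.to_f * 1000).to_i << 16) | (sequence & 0xFFFF)
end

# The skip rules above as pseudocode: keep the normal ID for statuses
# delivered in real time or claiming to be from the future.
def assign_backdated_id(status, realtime:)
  return if realtime
  return if status.created_at > Time.now.utc
  status.id = snowflake_id_at(status.created_at)
end
~~~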

Eugen Rochko | eb5ac23434 | Clean up code style of Mastodon::TimestampId module (#5232)

* Clean up code style of Mastodon::TimestampId module
* Update brakeman config

aschmitz | 468523f4ad | Non-Serial ("Snowflake") IDs (#4801)

* Use non-serial IDs

This change makes a number of nontrivial tweaks to the data model in Mastodon:

* All IDs are now 8-byte integers (rather than mixed 4- and 8-byte)
* IDs are now assigned as:
  * Top 6 bytes: millisecond-resolution time from epoch
  * Bottom 2 bytes: serial (within the millisecond) sequence number
  * See /lib/tasks/db.rake's `define_timestamp_id` for details, but note that the purpose of these changes is to make it difficult to determine the number of objects in a table from the ID of any object.
* The Redis sorted set used for the feed will have values used to look up toots, rather than scores. This is almost always the same as the existing behavior, except in the case of boosted toots. This change was made because Redis stores scores as double-precision floats, which cannot store the new ID format exactly. Note that this doesn't cause problems with sorting/pagination, because ZREVRANGEBYSCORE sorts lexicographically when scores are tied. (This will still cause sorting issues when the ID gains a new significant digit, but that's extraordinarily uncommon.)

Note a couple of tradeoffs have been made in this commit:

* lib/tasks/db.rake is used to enforce many/most column constraints, because this commit seems likely to take a while to bring upstream. Enforcing a post-migrate hook is an easier way to maintain the code in the interim.
* Boosted toots will appear in the timeline as many times as they have been boosted. This is a tradeoff due to the way the feed is saved in Redis at the moment, but will be handled by a future commit.

This would effectively close Mastodon's #1059, as it is a snowflake-like system of generating IDs. However, given how involved the changes were simply within Mastodon, it may have unexpected interactions with some clients if they store IDs as doubles (or as 4-byte integers). This was a problem that Twitter ran into with their "snowflake" transition, particularly in JavaScript clients that treated IDs as JS integers rather than strings. It would therefore be useful to test these changes at least in the web interface and popular clients before pushing them to all users.

* Fix JavaScript interface with long IDs

Somewhat predictably, the JS interface handled IDs as numbers, which in JS are IEEE double-precision floats. This loses some precision when working with numbers as large as those generated by the new ID scheme, so we instead handle them here as strings. This is relatively simple, and doesn't appear to have caused any problems, but should definitely be tested more thoroughly than the built-in tests. Several days of use appear to support this working properly.

BREAKING CHANGE: The major(!) change here is that IDs are now returned as strings by the REST endpoints, rather than as integers. In practice, relatively few changes were required to make the existing JS UI work with this change, but it will likely hit API clients pretty hard: it's an entirely different type to consume. (The one API client I tested, Tusky, handles this with no problems, however.) Twitter ran into this issue when introducing Snowflake IDs, and decided to instead introduce an `id_str` field in JSON responses. I have opted to *not* do that, and instead force all IDs to 64-bit integers represented by strings in one go.
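
As a concrete illustration of the precision problem (a generic Ruby sketch, not code from this commit): IEEE doubles carry a 53-bit mantissa, so distinct 64-bit IDs can collapse to the same float.

~~~ruby
a = 2**53      # 9007199254740992
b = 2**53 + 1  # a distinct ID, one higher

a.to_f == b.to_f  # => true:  the two IDs collide as doubles
b.to_f.to_i == b  # => false: the round trip loses the +1
~~~

This is why IDs travel as strings in JSON and as zset members (not scores) in Redis.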

(I believe Twitter exacerbated their problem by rolling out the changes three times: once for statuses, once for DMs, and once for user IDs, as well as by leaving an integer ID value in JSON. As they said, "If you're using the `id` field with JSON in a Javascript-related language, there is a very high likelihood that the integers will be silently munged by Javascript interpreters. In most cases, this will result in behavior such as being unable to load or delete a specific direct message, because the ID you're sending to the API is different than the actual identifier associated with the message." [1]) However, given that this is a significant change for API users, alternatives or a transition time may be appropriate.

1: https://blog.twitter.com/developer/en_us/a/2011/direct-messages-going-snowflake-on-sep-30-2011.html

* Restructure feed pushes/unpushes

This was necessary because the previous behavior used Redis zset scores to identify statuses, but those are IEEE double-precision floats, so we can't actually use them to identify all 64-bit IDs. However, it leaves the code in a much better state for refactoring reblog handling/coalescing.

Feed-management code has been consolidated in FeedManager, including:

* BatchedRemoveStatusService no longer directly manipulates feed zsets
* RemoveStatusService no longer directly manipulates feed zsets
* PrecomputeFeedService has moved its logic to FeedManager#populate_feed (PrecomputeFeedService largely made lots of calls to FeedManager, but didn't follow the normal adding-to-feed process.)

This has the effect of unifying all of the feed push/unpush logic in FeedManager, making it much more tractable to update it in the future.

Due to some additional checks that must be made during, for example, batch status removals, some Redis pipelining has been removed. It does not appear that this should cause significantly increased load, but if necessary, some optimizations are possible in batch cases. These were omitted in the pursuit of simplicity, but a batch_push and batch_unpush would be possible in the future.

Tests were added to verify that pushes happen under expected conditions, and to verify reblog behavior (both on pushing and unpushing). In the case of unpushing, this includes testing behavior that currently leads to confusion such as Mastodon's #2817, but this codifies that the behavior is currently expected.

* Rubocop fixes

I could swear I made these changes already, but I must have lost them somewhere along the line.

* Address review comments

This addresses the first two comments from review of this feature:

https://github.com/tootsuite/mastodon/pull/4801#discussion_r139336735
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139336931

This adds an optional argument to FeedManager#key, the subtype of feed key to generate. It also tests to ensure that FeedManager's settings are such that reblogs won't be tracked forever.

* Hardcode IdToBigints migration columns

This addresses a comment during review:
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139337452

This means we'll need to make sure that all _id columns going forward are bigints, but that should happen automatically in most cases.
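
To make the member-versus-score point from the feed restructuring above concrete, here is a minimal redis-rb sketch (the key name and variables are hypothetical; the real logic lives in FeedManager):

~~~ruby
require 'redis'

redis      = Redis.new
account_id = 1          # hypothetical account
status_id  = 2**53 + 5  # an ID too large for an exact double

# The zset member is the string we later look the status up by; the
# score is only used for ordering, so double-precision rounding can no
# longer corrupt which status an entry refers to.
redis.zadd("feed:home:#{account_id}", status_id, status_id.to_s)

# Tied scores fall back to lexicographic order of the members, which
# matches numeric order while IDs have the same number of digits.
page = redis.zrevrangebyscore("feed:home:#{account_id}", '+inf', '-inf',
                              limit: [0, 20])
~~~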

* Additional fixes for stringified IDs in JSON

These should be the last two. These were identified using eslint to try to identify any plain casts to JavaScript numbers. (Some such casts are legitimate, but these were not.) Adding the following to .eslintrc.yml will identify casts to numbers:

~~~
no-restricted-syntax:
  - warn
  - selector: UnaryExpression[operator='+'] > :not(Literal)
    message: Avoid the use of unary +
  - selector: CallExpression[callee.name='Number']
    message: Casting with Number() may coerce string IDs to numbers
~~~

The remaining three casts appear legitimate: two casts to array indices, one in a server to turn an environment variable into a number.

* Only implement timestamp IDs for Status IDs

Per discussion in #4801, this is only being merged in for Status IDs at this point. We do this in a migration, as there is no longer use for a post-migration hook. We keep the initialization of the timestamp_id function as a Rake task, as it is also needed after db:schema:load (as db/schema.rb doesn't store Postgres functions).

* Change internal streaming payloads to stringified IDs as well

This is equivalent to 591a9af356faf2d5c7e66e3ec715502796c875cd from #5019, with an extra change for the addition to FeedManager#unpush.

* Ensure we have a status_id_seq sequence

Apparently this is not a given when specifying a custom ID function, so now we ensure it gets created. This uses the generic version of this function to more easily support adding additional tables with timestamp IDs in the future, although it would be possible to cut this down to a less generic version if necessary. It is only run during db:schema:load or the relevant migration, so the overhead is extraordinarily minimal.

* Transition reblogs to new Redis format

This provides a one-way migration to transition old Redis reblog entries into the new format, with a separate tracking entry for reblogs. It is not invertible because doing so could (if timestamp IDs are used) require a database query for each status in each user's feed, which is likely to be a significant toll on major instances.

* Address review comments from @akihikodaki

No functional changes.

* Additional review changes

* Heredoc cleanup

* Run db:schema:load hooks for test in development

This matches the behavior in Rails' ActiveRecord::Tasks::DatabaseTasks.each_current_configuration, which would otherwise break `rake db:setup` in development. It also moves some functionality out to a library, which will be a good place to put additional related functionality in the near future.
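
Of the items above, the sequence guarantee is the easiest to picture. A hedged sketch, simplified to statuses only where the commit's version is generic over tables (PostgreSQL 9.5+ syntax; names are illustrative):

~~~ruby
# Ensure a sequence backs the custom ID function; without it, inserts
# that fall back to the sequence would fail.
ActiveRecord::Base.connection.execute(<<~SQL)
  CREATE SEQUENCE IF NOT EXISTS statuses_id_seq OWNED BY statuses.id;
SQL
~~~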

Daniel Hunsaker | 9ead3d1cdb | [nanobox] Adjustments for Nanobox development (#3295)

Because Nanobox doesn't run data components in the same container as the code, a few tweaks are needed in the configuration to get WebPack to work properly in development mode. The same differences lead to needing to use `DATABASE_URL` by default in the `.env` file for Rails to work correctly.

Limitations of our `.env` loader for Node.js mean the `.env` file needs to be compiled everywhere in order to work, so we now compile it in development, too. Also, all the `.env.production` tweaks have been consolidated into a single command.

Finally, since Nanobox actually creates the database when it sets up the database server, the existence of the database alone is insufficient to determine whether to migrate or to set up. So we add a condition to `rake db:migrate:setup` to check whether any migrations have run: if the database doesn't exist yet, `db:setup` is called; if it exists but no migrations have run, `db:migrate` and `db:seed` are called instead (the same basic idea as `db:setup`, but skipping `db:create`, which would only cause problems with an existing DB); otherwise, only `db:migrate` is called.

None of these changes should affect development, and all are designed not to interfere with existing behaviors in other environments.
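
A minimal sketch of that `db:migrate:setup` conditional, assuming the Rails 5-era `ActiveRecord::Migrator.current_version` API (not necessarily the exact code in Mastodon's lib/tasks/db.rake):

~~~ruby
namespace :db do
  namespace :migrate do
    desc 'Create, set up, or migrate the database depending on its state'
    task setup: :environment do
      begin
        if ActiveRecord::Migrator.current_version.zero?
          # Nanobox already created the database, but no migrations have
          # run yet: migrate and seed, skipping db:create.
          Rake::Task['db:migrate'].invoke
          Rake::Task['db:seed'].invoke
        else
          Rake::Task['db:migrate'].invoke
        end
      rescue ActiveRecord::NoDatabaseError
        # No database at all: full db:create + schema load + seed.
        Rake::Task['db:setup'].invoke
      end
    end
  end
end
~~~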

Daniel Hunsaker | 256e3adc1d | Add Support for Nanobox (#1709)

* Nanobox Support
- Added support for running Mastodon using Nanobox, both for local development, and for deployment to production
- Dev mode tested and is working properly
- Deployment is undergoing test as of this writing. If it works, this line will be amended to state success; if not, one or more subsequent commits will provide fixes.
* [nanobox] Resolve Deploy Issues
Everything seems to work except routing to the streaming API. Will investigate with the Nanobox staff and make fix commits if needed.
Changes made:
- Also need `NODE_ENV` in production
- Node runs on `:4000`
- Use `envsubst` to commit `.env.production` values, since `dotEnv` packages don't always support referencing other variables
- Assets can't be precompiled after the `transform` hook, so we do this locally so it only has to be done once.
- Rails won't create `production.log` on its own, so we create it ourselves.
- Some `start` commands run from `/data/` for some reason, so use absolute paths in command arguments
* [nanobox] Update Ruby version
* [nanobox] Fix db.rake Ruby code style issues
* [nanobox] Minor Fixes
Some minor adjustments to improve functionality:
- Fixed routing to `web.stream` instances
- Adjust `.env.nanobox` to properly generate a default `SMTP_FROM_ADDRESS` via `envsubst`
- Update Nginx configs to properly support the needed HTTP version and headers for proper functionality (the streaming API doesn't work without some of these settings in place)
* [nanobox] Move usage info to docs repo
* [nanobox] Updates for 1.2.x
- Need to leave out `pkg-config`, since Nanobox deploys without Ruby's headers: create a gem group to exclude the gem during Nanobox installs, but allow it to remain part of the default set otherwise (see the sketch after this list)
- Update cron jobs to cover new/updated Rake tasks
- Update `.env.nanobox` to include latest defaults and additions
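
A hedged sketch of that gem-group arrangement (the `:pkg_config` group name and install flag are assumptions, not necessarily Mastodon's actual Gemfile):

~~~ruby
# Gemfile: keep pkg-config in the default set, but inside a named group
# so Nanobox can exclude it where Ruby's headers are absent.
group :pkg_config do
  gem 'pkg-config'
end
~~~

Nanobox would then install with `bundle install --without pkg_config`, while other environments keep the gem by default.
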
* [nanobox] Fix for nokogumbo, added in 1.3.x
Apparently, nokogumbo (pulled in by sanitize, added with `OEmbed Support for PreviewCard` (#2337) -