The default limit of 10 retries with exponential backoff meant
that if the S3 server was timing out, you would be stuck with it
for much, much longer than the 5-second read timeout we expect.
The uploading happens within a database transaction, which means
a failing S3 server could negatively affect database performance.
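As a rough sketch, tightening the retry and timeout behaviour could look like the following, assuming Paperclip with the aws-sdk S3 storage; the option names are standard aws-sdk client options, but the values and the initializer shown are illustrative:

```ruby
# config/initializers/paperclip.rb (illustrative)
# Fail fast when S3 misbehaves instead of retrying up to 10 times
# with exponential backoff while a database transaction is open.
Paperclip::Attachment.default_options[:s3_options] = {
  retry_limit:       0, # no automatic retries
  http_open_timeout: 5, # seconds allowed to establish a connection
  http_read_timeout: 5, # seconds allowed to wait for a response
}
```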
It's possible that after_commit callbacks were not firing when
exceptions occurred in the process. Also, the default Sidekiq
strategy does not push indexing jobs immediately; that deferral is
unnecessary and could be part of the issue too.
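For illustration, this is the shape of the pattern involved; the model and worker names here are hypothetical. If an exception rolls the transaction back, after_commit never fires and no indexing job is pushed:

```ruby
# Hypothetical model and worker names, for illustration only.
class Status < ApplicationRecord
  # after_commit fires only once the surrounding transaction commits;
  # an exception that rolls the transaction back silently skips the
  # callback, so the indexing job is never enqueued.
  after_commit :enqueue_indexing, on: [:create, :update]

  private

  def enqueue_indexing
    StatusIndexWorker.perform_async(id) # hypothetical Sidekiq worker
  end
end
```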
* Add nodeinfo endpoint
* Don't commit stuff from my local dev
* Consistent naming since we implemented 2.1 schema
* Add some additional node info stuff
* Expanding this to include federation info
* CodeClimate feedback
* CC feedback
* Using ActiveModel serializers seems like a good idea...
* get rid of draft 2.1 version
* Reimplement 2.1, also fix metaData -> metadata
* Fix metaData -> metadata here too
* Fix nodeinfo 2.1 tests
* Implement cache for monthly user aggregate
* Useless
* Remove ostatus from the list of supported protocols
* Fix nodeinfo's open_registration reading an obsolete setting variable
* Only serialize domain blocks with user-facing limitations
* Do not needlessly list noop severity in nodeinfo
* Only serialize domain blocks info in nodeinfo when they are set to be displayed to everyone
* Enable caching for nodeinfo endpoints
* Fix rendering nodeinfo
* CodeClimate fixes
* Please CodeClimate
* Change InstancePresenter#active_user_count_months for clarity
* Refactor NodeInfoSerializer#metadata
* Remove nodeinfo 2.1 support as the schema doesn't exist
* Clean-up
The instrumentation code was used for StatsD metrics collection
prior to the switch to the nsa gem and should have been removed
at that point, as it no longer does anything at all.
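For context, the nsa gem hooks into ActiveSupport::Notifications and forwards the events to StatsD, which is what makes hand-rolled instrumentation redundant. A sketch based on the gem's documented usage; the host, port, and set of collectors are illustrative:

```ruby
# config/initializers/statsd.rb (illustrative)
require 'nsa'

statsd = ::Statsd.new('localhost', 8125)

# nsa subscribes to ActiveSupport::Notifications and ships the events
# to StatsD, so separate hand-written instrumentation code no longer
# has any effect and can be deleted.
::NSA.inform_statsd(statsd) do |informant|
  informant.collect(:action_controller, :web)
  informant.collect(:active_record, :db)
  informant.collect(:sidekiq, :sidekiq)
end
```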
* Rate-limit based on the remote IP address, not on a potential reverse proxy
* Limit rate of unauthenticated API requests further
* Rate-limit paging requests to one every 3 seconds
Deletions take a lot of resources to execute and cause a lot of
federation traffic, so it makes sense to decrease the number
someone can queue up through the API.
The new limit is 30 per 30 minutes.
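A sketch of what such throttles could look like with Rack::Attack; the rule names, path patterns, and paging discriminator are assumptions, not Mastodon's actual configuration:

```ruby
# config/initializers/rack_attack.rb (illustrative rules)
class Rack::Attack
  # At most 30 status deletions per 30 minutes per client IP.
  throttle('api/deletes', limit: 30, period: 30.minutes) do |req|
    req.ip if req.delete? && req.path.match?(%r{\A/api/v1/statuses/\d+\z})
  end

  # Paging requests: one every 3 seconds per client IP.
  throttle('api/paging', limit: 1, period: 3.seconds) do |req|
    req.ip if req.get? && req.query_string.include?('max_id')
  end
end
```

Keying the throttles on the client IP, rather than on whatever address a reverse proxy presents, is what the first bullet above refers to.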
I also added "public" here, as I can't think of a good reason not to add it. Perhaps it has some marginal benefit in that ISPs (or other proxies) can cache it for all users. The assets are certainly publicly available and the same for all users.
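In Rails terms, the change amounts to roughly the following, assuming the static file server serves the precompiled assets; the max-age value is illustrative:

```ruby
# config/environments/production.rb (illustrative)
# "public" allows shared caches (ISPs, proxies) to store the assets;
# they are identical for every user and publicly available anyway.
config.public_file_server.headers = {
  'Cache-Control' => 'public, max-age=31536000',
}
```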
* Add REST API for creating an account
The method is available to apps with a token obtained via the client
credentials grant. It creates a user and account records, as well as
an access token for the app that initiated the request. The user is
unconfirmed, and an e-mail is sent as usual.
The method returns the access token, which the app should save for
later. The REST API is not available to users with unconfirmed
accounts, so the app must wait for the user to click the
confirmation link in their e-mail inbox.
The method is rate-limited by IP to 5 requests per 30 minutes. A
sketch of the client flow follows the list below.
* Redirect users back to the app from confirmation if they were created with an app
* Add tests
* Return 403 on the method if registrations are not open
* Require agreement param to be true in the API when creating an account
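Here is a sketch of the client flow under the assumptions above; the endpoint paths follow the description, and the parameter names other than agreement are guesses:

```ruby
require 'json'
require 'net/http'
require 'uri'

BASE = URI('https://mastodon.example') # hypothetical instance

# 1. Obtain an app-level token via the client credentials grant.
token_res = Net::HTTP.post_form(
  BASE + '/oauth/token',
  'grant_type'    => 'client_credentials',
  'client_id'     => ENV.fetch('CLIENT_ID'),
  'client_secret' => ENV.fetch('CLIENT_SECRET')
)
app_token = JSON.parse(token_res.body).fetch('access_token')

# 2. Create the account. A 403 means registrations are closed, and
#    agreement must be true or the request is rejected.
req = Net::HTTP::Post.new(BASE + '/api/v1/accounts')
req['Authorization'] = "Bearer #{app_token}"
req.set_form_data(
  'username'  => 'alice',
  'email'     => 'alice@example.com',
  'password'  => 'correct horse battery staple',
  'agreement' => 'true'
)
res = Net::HTTP.start(BASE.host, BASE.port, use_ssl: true) do |http|
  http.request(req)
end

# 3. Save the user's token; it only becomes usable once the user
#    clicks the confirmation link in their e-mail.
user_token = JSON.parse(res.body)['access_token']
```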
Right now, this includes three endpoints: host-meta, webfinger, and change-password.
host-meta and webfinger are publicly available and do not use any authentication. Nothing bad can be done by accessing them in a user's browser.
change-password being CORS-enabled will only reveal the URL it redirects to (which is /auth/edit) but not anything about the actual /auth/edit page, because it does not have CORS enabled.
The documentation for hosting an instance on a different domain should also be updated to point out that, at a minimum, Access-Control-Allow-Origin: * should be set for the /.well-known/host-meta redirect to allow browser-based, non-proxied instance discovery.
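If rack-cors is what wires these headers up, the configuration might look roughly like this; the middleware placement and exact resource list are assumptions:

```ruby
# config/initializers/cors.rb (illustrative)
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    # Publicly readable discovery endpoints; no credentials involved.
    resource '/.well-known/host-meta', headers: :any, methods: [:get]
    resource '/.well-known/webfinger', headers: :any, methods: [:get]
    # Only the redirect target (/auth/edit) is revealed, not its contents.
    resource '/.well-known/change-password', headers: :any, methods: [:get]
  end
end
```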