* Add more specific error message when request body digest is invalid
This may help other implementors debug their implementation.
* Relax Host parameter requirement to GET requests
The only POST requests processed by Mastodon need objects/actors (including
their host) to be explicitly mentioned in the request's body, so replaying
a legitimate request to another host should not be a security issue.
* Support Digest headers using multiple algorithms or lowercase algorithm names
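For illustration, both of the following Digest headers would now be accepted (digest values elided as placeholders):

```
Digest: SHA-256=<base64-digest>
Digest: sha-512=<base64-digest>,sha-256=<base64-digest>
```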
- Make the permalink to a toot more easily clickable
- Fix clicks between the icon and the timestamp registering as clicks on the display name
- Fix clicks slightly below the timestamp registering as clicks on the display name
e.g. if someone on an instance that previously had followers gets mentioned
in a private toot, then before this PR, their instance would not receive a
Collection-Synchronization header and might show the toot to the former
followers in addition to the mentioned person.
* Add support for followers synchronization on the receiving end
Check the `collectionSynchronization` attribute on `Create` and `Announce`
activities and synchronize followers from the provided collection if possible.
* Add tests for followers synchronization on the receiving end
* Add support for follower synchronization on the sender's end
* Add tests for the sending end
* Switch from AS attributes to HTTP header
Replace the custom `collectionSynchronization` ActivityStreams attribute by
an HTTP header (`X-AS-Collection-Synchronization`) with the same syntax as
the `Signature` header and the following fields:
- `collectionId` to specify which collection to synchronize
- `digest` for the SHA256 hex-digest of the list of followers known on the
receiving instance (where “receiving instance” is determined by accounts
sharing the same host name for their ActivityPub actor `id`)
- `url` of a collection that should be fetched by the instance actor
Internally, move away from the webfinger-based `domain` attribute and use
account `uri` prefix to group accounts.
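As an illustration, such a header could look like the following (values are hypothetical, and the header is wrapped here for readability):

```
Collection-Synchronization:
  collectionId="https://example.com/users/alice/followers",
  digest="<sha256-hex-digest>",
  url="https://example.com/users/alice/followers_synchronization"
```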
* Add environment variable to disable followers synchronization
Since the whole mechanism relies on some new preconditions that, in some
extremely rare cases, might not be met, add an environment variable
(DISABLE_FOLLOWERS_SYNCHRONIZATION) to disable the mechanism altogether and
avoid followers being incorrectly removed.
The current conditions are:
1. all managed accounts' actor `id` and inbox URL have the same URI scheme and
netloc.
2. all accounts whose actor `id` or inbox URL share the same URI scheme and
netloc as a managed account must be managed by the same Mastodon instance
as well.
As far as Mastodon is concerned, breaking those preconditions requires extensive
configuration changes in the reverse proxy and might also cause other issues.
Therefore, this environment variable provides a way out for people with highly
unusual configurations, and can be safely ignored for the overwhelming majority
of Mastodon administrators.
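For those rare setups, disabling the mechanism is a one-line configuration change:

```
# .env.production (only for the highly unusual setups described above)
DISABLE_FOLLOWERS_SYNCHRONIZATION=true
```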
* Only set follower synchronization header on non-public statuses
This is to avoid unnecessary computations and allow Follow-related
activities to be handled by the usual codepath instead of going through
the synchronization mechanism (otherwise, any Follow/Undo/Accept activity
would trigger the synchronization mechanism even if processing the activity
itself would be enough to re-introduce synchronization)
* Change how ActivityPub::SynchronizeFollowersService handles follow requests
If the remote lists a local follower which we only know has sent a follow
request, consider the follow request as accepted instead of sending an Undo.
* Integrate review feedback
- rename X-AS-Collection-Synchronization to Collection-Synchronization
- various minor refactoring and code style changes
* Only select required fields when computing followers_hash
* Use actor URI rather than webfinger domain in synchronization endpoint
* Change hash computation to be an XOR of individual hashes
Makes it much easier to be memory-efficient, and avoids sorting discrepancy issues.
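A minimal Ruby sketch of the idea (method name and exact hashed values are illustrative, not the actual implementation): since XOR is commutative and associative, followers can be folded in any order with a constant-size accumulator.

```ruby
require 'digest'

# XOR per-follower SHA256 digests together: the result is independent of
# iteration order, and only a 32-byte accumulator is kept in memory.
def followers_hash(follower_uris)
  acc = [0] * 32

  follower_uris.each do |uri|
    Digest::SHA256.digest(uri).bytes.each_with_index do |byte, i|
      acc[i] ^= byte
    end
  end

  acc.pack('C*').unpack1('H*') # hex digest, as exchanged between instances
end
```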
* Marginally improve followers_hash computation speed
* Further improve hash computation performance by using pluck_each
* Change how CDN_HOST is passed down to make assets build reproducible
* Change webpacker/webpack configuration to dynamically load publicPath based on meta header
* Fix embedded layout missing the cdn-host meta header
* Add notification permission handling code
* Request notification permission when enabling any notification setting
* Add badge to notification settings when permissions insufficient
* Disable alerts by default, requesting permission and enabling them during onboarding
There are edge cases where requests to certain hosts timeout when
using the vanilla HTTP.rb gem, which the goldfinger gem uses. Now
that we no longer need to support OStatus servers, webfinger logic
is so simple that there is no point encapsulating it in a gem, so
we can just use our own Request class. With that, we benefit from
more robust timeout code and IPv4/IPv6 resolution.
Fix #14091
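A rough sketch of what the in-house lookup can look like, assuming Mastodon's internal Request API (`perform`, `body_with_limit`) and error class; names here are illustrative:

```ruby
# Build the RFC 7033 webfinger URL and fetch it through our own Request
# class, inheriting its timeout handling and IPv4/IPv6 resolution.
def webfinger!(uri)
  _, domain = uri.split('@')
  url = "https://#{domain}/.well-known/webfinger?resource=acct:#{uri}"

  Request.new(:get, url).perform do |response|
    raise Mastodon::UnexpectedResponseError, response unless response.code == 200

    Oj.load(response.body_with_limit, mode: :strict)
  end
end
```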
Fixes #14862
This used to be the case until #13987, which introduced a hotkey to toggle
the Content Warning field.
Unfortunately, macOS relies on the “alt” key for many things, including
composing text (see #14862), therefore, even if that makes the CW toggle
hotkey significantly less useful, it makes sense to not interfere with
composing toots.
* Add bell button
Fix#4890
* Remove duplicate type from post-deployment migration
* Fix legacy class type mappings
* Improve query performance with better index
* Fix validation
* Remove redundant index from notifications
* Add paragraph about browser add-ons when encountering some errors
When a crash is caused by a NotFoundError exception, add a paragraph
to the error page mentioning browser add-ons.
Indeed, crashes with NotFoundError are often caused by browser extensions
messing with the DOM in ways React can't recover from (e.g. issues #13325
and #14731).
* Reword error messages
* Do not serve account actors at all in limited federation mode
When an account is fetched without a signature from an allowed instance,
return an error.
This isn't really an improvement in security, as the only information that was
previously returned was required protocol-level info, and the only personal bit
was the existence of the account. The existence of the account can still be
checked by issuing a webfinger query, as those are accepted without signatures.
However, this change makes it so that unallowed instances won't create account
records on their end when they find a reference to an unknown account.
The previous behavior of rendering a limited list of fields, instead of not
rendering the actor at all, was in order to prevent situations in which two
instances in Authorized Fetch mode or Limited Federation mode would fail to
reach each other because resolving an account would require a signed query…
from an account which can only be fetched with a signed query itself. However,
this should now be fine as fetching accounts is done by signing on behalf of
the special instance actor, which does not require any kind of valid signature
to be fetched.
* Fix tests
* Check for and record reblog info atomically
Instead of using ZREVRANK to determine whether a reblog is a new reblog or not,
use ZADD's NX option to perform the check and addition atomically.
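Sketched with the redis-rb client (key name is illustrative): ZADD with the NX flag only adds the member if it is absent and reports whether it did, so the membership test and the insertion cannot race.

```ruby
# Returns true only if this account had not already reblogged the status;
# check and insertion happen in a single atomic Redis command.
def first_reblog?(redis, status_id, account_id)
  redis.zadd("reblogged:#{status_id}", Time.now.to_f, account_id, nx: true)
end
```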
* Replace the ZREVRANK call with ZSCORE, which is more efficient
* Make tests a bit stricter
* Fix off-by-one
* Add database support for list show-reply preferences
* Add backend support to read and update list-specific show_replies settings
* Add basic UI to set list replies setting
* Add specs for list replies policy
* Switch "cycling" reply policy link to a set of radio inputs
* Capitalize replies_policy strings
* Change radio button design to be consistent with that of the directory explorer
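A hypothetical sketch of the storage and model side (column name from this PR; exact enum values are assumed):

```ruby
class AddRepliesPolicyToLists < ActiveRecord::Migration[5.2]
  def change
    # 0 = show replies to list members, 1 = to any followed user, 2 = none
    add_column :lists, :replies_policy, :integer, null: false, default: 0
  end
end

class List < ApplicationRecord
  # _prefix avoids clashing with ActiveRecord's own `none` scope
  enum replies_policy: [:list, :followed, :none], _prefix: :replies
end
```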
* [WiP] Update Tesseract.js
- Update Tesseract.js to 2.2.1
- Use versioned file names
- differentiate two progress states: preparing OCR and detecting the picture
* Get rid of copy-webpack-plugin
* Add back "Home" link to "Getting started" when Home column isn't mounted
* Fix keys in getting_started
It should not matter much in practice as the list of items will only
change extremely rarely, but having a `key` that corresponds to the actual
item makes much more sense than having it be the index of the item within
the list.
* Make Array-creation behavior of Paginable more predictable
Paginable.paginate_by_id usually returns an ActiveRecord::Relation, but it
returns an Array if the min_id option is present. This behavior caused
problems that were fixed by the following commits:
- 552e886b64
- b63ede5005
- 64ef37b89d
To prevent similar problems from recurring, this commit introduces two
changes:
- The scope now always returns an Array, whether the min_id option is present
or not.
- The scope is renamed to to_a_paginated_by_id to clarify that it returns an
Array.
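An illustrative call site (model, constant and params are hypothetical):

```ruby
# Always an Array now, regardless of whether min_id is given:
statuses = Status.to_a_paginated_by_id(
  DEFAULT_STATUSES_LIMIT,
  max_id: params[:max_id],
  min_id: params[:min_id]
)
```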
* Transform Paginable.to_a_paginated_by_id from a scope to a class method
https://api.rubyonrails.org/classes/ActiveRecord/Scoping/Named/ClassMethods.html#method-i-scope
> The method is intended to return an ActiveRecord::Relation object, which
> is composable with other scopes.
Paginable.to_a_paginated_by_id returns an Array and is not appropriate
as a scope.
* Replace incorrect use of distinct with group
Some uses of ActiveRecord::QueryMethods#distinct pass field names, but that
is incorrect for the current version of Rails.
ActiveRecord::QueryMethods#group provides the expected behavior and also
benefits performance. See commit 6da24aad4cafdef8d8a2c92bac2002a5fc2fe9c8.
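As an illustration of the difference (the query shape is assumed):

```ruby
# `distinct` ignores field arguments in current Rails and emits a bare
# SELECT DISTINCT over all selected columns:
Status.joins(:favourites).distinct
# Grouping by the primary key deduplicates on exactly one column instead:
Status.joins(:favourites).group('statuses.id')
```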
* Introduce ApplicationController#cache_collection_paginated_by_id
ApplicationController#cache_collection_paginated_by_id fuses
ApplicationController#cache_collection and Paginable.paginate_by_id.
An advantage of this method is that it prevents the scope returned by
Paginable.paginate_by_id from being modified further.
ApplicationController#cache_collection always returns an array, so there is
no possibility of the scope being modified. It is also clearer to the
programmer, considering the implication of "cache".
This method can also emit more efficient queries by using
Cacheable.cache_ids before calling Paginable.paginate_by_id.
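A hypothetical sketch of the combined helper's shape:

```ruby
# Paginate ids first, then hydrate records through the cache; the caller
# receives a plain Array, so the scope can no longer be modified downstream.
def cache_collection_paginated_by_id(scope, klass, limit, options)
  cache_collection scope.cache_ids.to_a_paginated_by_id(limit, options), klass
end
```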
The old implementation had two queries:
1. The query constructed in Api::V1::FavouritesController#results
2. The query constructed in #cached_favourites, which is merged with 1.
Both of them are issued against PostgreSQL. The combination of the two
queries caused the following problems:
- The small window between the two queries involves race conditions.
- Minor performance inefficiency.
Moreover, the construction of query 2, which involves merging with query 1,
has a bug. Query 1 is finalized with paginate_by_id, but paginate_by_id
returns an array when the min_id parameter is specified. That prevents the
query from being merged, and in the real world, ActiveRecord simply ignores
the merge (!), which results in a full scan of the statuses and favourites
tables.
This change fixes these issues by simply letting query 1 do all the work.
A DISTINCT clause removes duplicated records according to all the selected
attributes. In reality, it could remove duplicated records by looking only at
statuses.id, but the clause confuses the query planner and yields poor
performance.
The behavior is also problematic if the scope produced by HashQueryService
is used to query columns other than id (using the pluck method, for example).
The scope is expected to contain unique statuses, but the uniqueness will be
evaluated on some arbitrary columns other than id.
A GROUP BY clause resolves those problems by explicitly specifying the
column to take into account for record deduplication.
The workaround for this DISTINCT clause problem in
Api::V1::Timelines::TagController is no longer necessary and has been removed.
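To illustrate the pluck case above (`scope` standing in for the statuses scope built by HashQueryService):

```ruby
# With DISTINCT, uniqueness is evaluated on the plucked column:
scope.distinct.pluck(:account_id)   # unique account ids, not unique statuses
# With GROUP BY on the primary key, each status is counted exactly once:
scope.group(:id).pluck(:account_id) # one row per status
```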
* Add support for latest HTTP Signatures spec draft
https://www.ietf.org/id/draft-ietf-httpbis-message-signatures-00.html
- add support for the “hs2019” signature algorithm (assumed to be equivalent
to RSA-SHA256, since we do not have a mechanism to specify the algorithm
within the key metadata yet)
- add support for (created) and (expires) pseudo-headers and related
signature parameters, when using the hs2019 signature algorithm
- adjust default “headers” parameter while being backwards-compatible with
previous implementation
- change the acceptable time window logic from 12 hours surrounding the “date”
header to accepting signatures created up to 1 hour in the future and
expiring up to 1 hour in the past (but only allowing expiration dates up to
12 hours after the creation date)
This doesn't conform with the current draft, as it doesn't permit accounting
for clock skew.
This, however, should be addressed in a next version of the draft:
https://github.com/httpwg/http-extensions/pull/1235
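For illustration, a signature using the new pseudo-headers might look like this (values are made up, and the header is wrapped here for readability):

```
Signature: keyId="https://example.com/users/alice#main-key",
  algorithm="hs2019",created=1600000000,expires=1600003600,
  headers="(request-target) (created) (expires) host digest",
  signature="<base64-signature>"
```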
* Add additional signature requirements
* Rewrite signature params parsing using Parslet
* Make apparent which signature algorithm Mastodon uses on verification failure
Mastodon uses RSASSA-PKCS1-v1_5, which is not recommended for new applications,
and new implementers may thus unknowingly use RSASSA-PSS.
* Add workaround for PeerTube's invalid signature header
The previous parser allowed incorrect Signature headers, such as
those produced by old versions of the `http-signature` node.js package,
and seemingly used by PeerTube.
This commit adds a workaround for that.
* Fix `signature_key_id` raising an exception
Previously, parsing failures would result in `signature_key_id` being nil,
but the parser changes made that result in an exception.
This commit changes the `signature_key_id` method to return `nil` in case
of parsing failures.
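A sketch of the hardened accessor, assuming a Parslet-based parser (names are illustrative):

```ruby
# Return nil instead of raising when the Signature header cannot be parsed.
def signature_key_id
  signature_params['keyId']
rescue Parslet::ParseFailed
  nil
end
```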
* Move extra HTTP signature helper methods to private methods
* Relax (request-target) requirement to (request-target) || digest
This lets requests from Plume work without lowering security significantly.
Follow-up to #14359
In the case of limited toots, the receiver may not be explicitly part of the
audience. If a specific user's inbox URI was specified, it makes sense to
dereference the toot from the corresponding user, instead of trying to find
someone in the explicit audience.
* feat: add possibility of adding WebAuthn security keys to use as 2FA
This adds a basic UI for enabling WebAuthn 2FA. We did a little refactor
to the Settings page for editing the 2FA methods – now it will list the
methods that are available to the user (TOTP and WebAuthn) and from
there they'll be able to add or remove any of them.
Also, it's worth mentioning that for enabling WebAuthn it's required to
have TOTP enabled, so the first time that you go to the 2FA Settings
page, you'll be asked to set it up.
This work was inspired by the one done by GitHub on their platform, and
while it could be approached in different ways, we decided to go with
this one given that we feel it provides a great UX.
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: add request for WebAuthn as second factor at login if enabled
This commit adds the feature of using WebAuthn as a second factor for
login when enabled.
If users have WebAuthn enabled, a page requesting the use of a WebAuthn
credential to log in will now appear, although a link redirecting to the
old page for logging in using a two-factor code will also be present.
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: add possibility of deleting WebAuthn Credentials
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: disable WebAuthn when an Admin disables 2FA for a user
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: remove ability to disable TOTP leaving only WebAuthn as 2FA
Following examples from other platforms like GitHub, we decided to make
WebAuthn 2FA secondary to 2FA with TOTP, so we removed the possibility of
removing TOTP authentication alone, which would leave users with just
WebAuthn as 2FA. Instead, users have to click on 'Disable 2FA' in order
to remove second-factor auth entirely.
The reason for WebAuthn being secondary to TOTP is that, this way, users
will still be able to log in using the code from their phone's
application if they don't have their security keys with them – or have
even lost them.
* We had to change the flow for setting up TOTP a little, given that it's
now possible to set it up again even when TOTP is already enabled, in
order to let users switch their authenticator app – since it's no longer
possible for them to disable TOTP and set it up again with another
authenticator app.
So, basically, now instead of storing the new `otp_secret` in the
user, we store it in the session until the setup process is
finished.
This is because previously, when users clicked on 'Edit' in the new
two-factor methods list page but then went back without finishing the
flow, their `otp_secret` had already been changed, therefore
invalidating their previous authenticator app and making them unable to
log in again using TOTP.
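A simplified sketch of the flow described above (method, route and translation-key names are illustrative, as is the `otp_secret:` keyword):

```ruby
# Hold the candidate secret in the session; persist it only once the user
# has confirmed a valid code from the new authenticator app.
def new
  session[:new_otp_secret] ||= User.generate_otp_secret(32)
end

def create
  if current_user.validate_and_consume_otp!(params[:code], otp_secret: session[:new_otp_secret])
    current_user.update!(otp_secret: session.delete(:new_otp_secret))
    redirect_to settings_two_factor_authentication_methods_path
  else
    flash.now[:alert] = t('two_factor_authentication.wrong_code')
    render :new
  end
end
```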
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* refactor: fix eslint errors
The PR build was failing given that linting returned some errors.
This commit attempts to fix them.
* refactor: normalize i18n translations
The build was failing given that the i18n translation files were not
normalized.
This commit fixes that.
* refactor: avoid having the webauthn gem locked to a specific version
* refactor: use symbols for routes without '/'
* refactor: avoid sending webauthn disabled email when 2FA is disabled
When an admin disables 2FA for a user, we were sending them two emails:
one notifying them that 2FA was disabled, and another notifying them that
WebAuthn was disabled.
As the second one is redundant, since the first email covers it, we can
drop it and send just one email to users.
* refactor: avoid creating new env variable for webauthn_origin config
* refactor: improve flash error messages for webauthn pages
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
Before this change:
- unsubscribe() was not called for a disconnection
- It seems that WebSocketClient calls both connected() and reconnected(),
so subscriptionCounters were incremented twice for a single reconnection,
first from connected() and second from reconnected()
This might be an additional change to
https://github.com/tootsuite/mastodon/pull/14579
to recover subscriptions after a reconnect.
* Increase DNS timeout from 1 second to 5 seconds for MX check
1 second is rather short when using a recursive DNS resolver which
hasn't got a cached result already available. Use 5 seconds instead,
which is the timeout value we use for outgoing HTTP queries.
* Add more precise error messages for invalid e-mail addresses
* Fix client-side username validation at registration
It used the Account::USERNAME_RE regexp, which is for *remote* users;
local user validation is stricter. Also take into account the max username length.
* Add client-side form validation for password change
* Add client-side form validation to dedicated registration form
Previous changes only applied to the /about page, not the dedicated form on
/auth
* Add HTML-level validation of username in sign-up form
* Make required fields with incorrect values more visible
* Enable HTML form validation for the registration form
* Mark agreement checkbox as required client-side
* Add minimum length to password
* Add client-side password confirmation validation
* Fix not handling Undo on some activity types when they aren't inlined
When receiving an Undo for a non-inlined activity, try looking it up in
database using the URI. The queries are ad-hoc because we don't have a global
index of object URIs, and not all activity types are stored in database with
an index on their URI.
Announces are just statuses, and have an index on URIs, so this check can
be done efficiently.
Accepts cannot be handled at all because we don't record their URI at any
point.
Follows don't have an index on URI, but they have an index on the issuing
account, which should make such queries largely manageable.
Likes don't have an index on URI, they have an index on the issuing account,
but the number of favs per account may be very high, so I decided not to
handle that.
Blocks don't have an index on URI, but they have an index on the issuing
account, which should make such queries largely manageable.
In all cases, if an Undo could not be handled properly, we call `delete_later!`
because that does not require us to know more than the URI of the undone
property.
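For instance, undoing a non-inlined Announce can be sketched like this (helper names assumed from the surrounding code):

```ruby
def undo_announce
  status = Status.find_by(uri: object_uri, account: @account)

  if status.nil?
    delete_later!(object_uri) # all we know is the URI of the undone activity
  else
    RemoveStatusService.new.call(status)
  end
end
```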
* Add tests
* Make newer blocks overwrite older ones
Allows re-synchronizing block info by re-blocking and un-blocking again
when the original Undo Block has been lost.
* Add tests for some cachable responses
This only covers responses that we should have managed to make cachable
so far; it does not yet cover all responses that should be cachable in
the end.
* Fix RSS feeds not being cachable
* Add toot sent by the current user to the local state after sending a new toot
Related to #14021
* Decrement the toot counter on the profile when removing a toot
Related to #14021
* Remove semicolon at end of line
That should've worked just fine, but unfortunately, Crowdin wasn't able
to pick up on our non-existent "one" category, thus appending an empty
translation block to people's translations. An empty block WILL be used by
any ICU FormatMessage library, thus resulting in an empty translation
for the "one" category, and that requires an immediate fix.
This commit duplicates the contents of the "other" plural category.
Fixes #14337
The new audio player sets the background and foreground colors automatically
based on the thumbnail of the audio file, but the mastodon-light theme
overrides the controls' colors with a hardcoded color, which sometimes makes
them unreadable.
* Fix the local group's followers collection
* Fix to accept followed relayed_through_account
* Add local delivery to the group's followers
* Fix code style
* Revert "Add local delivery to the group's followers"
This reverts commit 3237effc199772e4c4d30f19082cbc5633f56196.
Since #14145, the `set_type_and_extension` hook has been moved from
`before_post_process` to `before_file_post_process`, but while the former
runs before all validations performed by Paperclip, the latter is dependent
on the order in which validations and hooks are defined.
In our case, this meant video files could be checked against the generic 10MB
limit, causing validation failures, which, internally, make Paperclip skip
post-processing, and thus, transcoding of the video file.
The actual validation would then happen after the type is correctly set, so
the large file would pass validation, but without being transcoded first.
This commit moves the hook definition so that it is run before checking for
the file size.