Before, when autosaving a draft, removing a repetition row would
send `_destroy` inputs to the controller – but not remove the row
from the DOM. This led to the `_destroy` inputs being sent again
on the next autosave request, which made the controller raise an error
(because those row fields had already been deleted).
To fix this, we let the controller response remove the deleted
row(s) from the DOM.
Using a controller response avoids the need to keep track of
operations on the JavaScript side: the controller can easily know which
row was just deleted, and emit the relevant changes for the DOM.
This also keeps the autosave requests robust: even if a request is
skipped (e.g. because of a network interruption), the next request will
still contain the relevant information to succeed, and won't leave the
form in an unstable state.
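For illustration, the JS response rendered by the controller could look
like this (the selector and id are assumptions, not the actual markup):

    // Hypothetical server-rendered response: remove the destroyed
    // repetition row(s) from the DOM.
    document
      .querySelectorAll('[data-repetition-row-id="42"]')
      .forEach((row) => row.remove());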
Fix #5470
When subclassing a JS error, most browsers include the subclass
constructor in the stacktrace :/
This is an issue, because:
- The stacktrace is deeper than it should be
- The stacktrace reaches into a polyfill for which there is no source
  map, which causes Sentry to infer the issue grouping from the JS file
  name. And the fingerprinted name changes on each release. So for each
  release the stacktrace is different; and Sentry can't group issues
  properly.
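One way to work around this, where the engine supports trimming the
stack (a sketch, only effective on V8-based browsers):

    class BaseError extends Error {
      constructor(message) {
        super(message);
        this.name = this.constructor.name;
        // Drop the constructor frames, so the stacktrace starts at the
        // `new` call site instead of inside this file.
        if (Error.captureStackTrace) {
          Error.captureStackTrace(this, this.constructor);
        }
      }
    }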
We have quite a lot of `Error reading file` errors when uploading files.
These errors are generated by ActiveStorage's `file_checksum.js`
component, but it swallows the actual cause of the error.
(See https://github.com/rails/rails/blob/5-2-stable/activestorage/app/javascript/activestorage/file_checksum.js#L38)
We can't really override the class to generate better errors, as it is
deeply nested in the ActiveStorage class hierarchy, and not exported to
external code.
Instead, we hook into the FileReader event handler, to insert a logger
when this error occurs. The original event handler is still called as
usual.
This is intended to be temporary. The debug code will be removed once
we get a better idea of what is going on.
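Roughly, the hook looks like this (the logging call is a placeholder
for our actual logger):

    // Wrap FileReader's addEventListener so that 'error' listeners
    // also log the underlying DOMException before the original
    // handler runs.
    const originalAddEventListener = FileReader.prototype.addEventListener;
    FileReader.prototype.addEventListener = function (type, listener, ...args) {
      let actualListener = listener;
      if (type === 'error') {
        actualListener = function (event) {
          console.error('FileReader error:', this.error); // the real cause
          return listener.call(this, event);
        };
      }
      return originalAddEventListener.call(this, type, actualListener, ...args);
    };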
A DirectUpload may fail for several reasons, and return many types of
errors (strings, XHR responses, Error objects, etc.).
For convenience, wrap all these errors in a FileUploadError object.
- It makes it easier for clients of the Uploader class to handle errors;
- It allows propagating the error code and the failure responsibility.
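A minimal sketch of the wrapper (the field names are assumptions):

    // Normalize the various DirectUpload failures into a single type.
    class FileUploadError extends Error {
      constructor(message, status = undefined, code = undefined) {
        super(message);
        this.name = 'FileUploadError';
        this.status = status; // HTTP status, when the failure is a response
        this.code = code;     // whose responsibility the failure is
      }
    }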
DirectUpload returns errors as strings, including an HTTP status and a
file name (and without a stack trace).
But Sentry groups issues according to the stack trace, and falls back
to the error message as a last resort.
So we have an issue: as all DirectUpload errors logged by Sentry are
generated on the same line, with random-looking messages, Sentry groups
them either too aggressively or not aggressively enough.
Instead of creating all the errors on the same line:
- add some `if` statements to create them on different lines (and so
with different stack traces),
- strip the file name from the error message.
This allows Sentry to group the errors properly, with meaningful error
messages.
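Sketch of the idea (the message patterns are assumptions):

    // Each failure mode gets its own `new` on a distinct line, so the
    // resulting stack traces differ and Sentry can group on them.
    function errorFromDirectUploadMessage(message) {
      // e.g. 'Error creating Blob for "report.pdf". Status: 422'
      const genericMessage = message.replace(/ for "[^"]*"/, '');
      if (genericMessage.includes('Status: 422')) {
        return new FileUploadError(genericMessage, 422);
      }
      if (genericMessage.includes('Status: 500')) {
        return new FileUploadError(genericMessage, 500);
      }
      return new FileUploadError(genericMessage);
    }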
When the authenticity token is invalid, the creation of the blob before
the direct upload returns a 422.
In that case the token will never become valid again, so retrying is
useless: don't show the "Retry" button.
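For instance (assuming the `status` field from the FileUploadError
sketch above):

    // A 422 on blob creation means a stale authenticity token:
    // retrying with the same token can never succeed.
    function canRetry(error) {
      return !(error instanceof FileUploadError && error.status === 422);
    }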
NB: of course the real fix is to understand why the authenticity token
is so often invalid – but this will be for later.
Polling for attachment status is a background operation. Errors should
not be reported to the user, who didn't even ask for this operation to
take place.
This is why we ignore all errors, whether JavaScript exceptions or
network errors.
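Schematically (the URL and response shape are assumptions):

    // Polling runs in the background: swallow network errors and
    // exceptions instead of surfacing them to the user.
    async function pollAttachmentStatus(url) {
      try {
        const response = await fetch(url, { credentials: 'same-origin' });
        return await response.json();
      } catch (error) {
        return undefined; // deliberately ignored
      }
    }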
This helper is:
- no longer used;
- buggy (not all requests increment it);
- discouraged (we should instead match a UI change that signals the end
of an ajax request).
Good riddance.
When navigating away from the page, the field receives the 'focusout'
event – but is no longer present in the DOM.
Thus we need to check that the DOM element is actually present.
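The guard is as small as (a sketch; the handler name is hypothetical):

    document.addEventListener('focusout', (event) => {
      // During a page navigation, the field may already be detached
      if (!document.body.contains(event.target)) return;
      autosave(event.target); // hypothetical handler
    });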
When calling `redirect_to` in a Rails controller that responds with
JavaScript, Rails emits `Turbolinks.replace(…)` commands. And for this,
Turbolinks needs to be globally available.
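Hence this one-liner (sketch):

    import Turbolinks from 'turbolinks';

    // Server-rendered JS responses call `Turbolinks.replace(…)` on the
    // global object, so expose the module there.
    window.Turbolinks = Turbolinks;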
Otherwise some champs are created before the previous one has been
saved, and React complains that several champs have the same
`champ-undefined` key.
(Plus it made the tests flaky and unreliable.)
Before, the editor attempted to create a default champ as soon as the
list became empty.
This created many race conditions, which made the tests flaky.
Remove this behavior, and add an empty label instead.
When a file is being uploaded asynchronously, hidden input fields
are created by DirectUpload to contain the result of the upload.
However, before the upload finishes, the value of these inputs is not
meaningful. Moreover, it makes the ActiveRecord signature invalid – thus
preventing drafts from being saved.
Exclude these fields from the auto-save.
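Schematically, assuming the pending inputs can be recognized by a
marker attribute (the attribute name is hypothetical):

    // Keep only the inputs whose direct upload has already finished.
    function autosavableInputs(form) {
      return [...form.querySelectorAll('input')].filter(
        (input) => !input.hasAttribute('data-direct-upload-pending')
      );
    }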
Older browsers implemented a different default value for `window.fetch`
credentials: cookies were omitted unless the option was explicitly set.
So we force the value for these older browsers:
- Firefox 39-60
- Chrome 42-67
- Safari 10.1-11.1.2
See https://github.com/github/fetch#sending-cookies
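So requests now pass the option explicitly, e.g. (the URL is just an
example):

    // Send cookies on same-origin requests, even on older browsers
    // where the default credentials mode was 'omit'.
    fetch('/api/dossiers', { credentials: 'same-origin' });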
Before, when the "Add new champ" button was clicked, the new champ
was inserted after the **first** fully visible champ.
Most of the time this was unexpected: the correct behavior is to
insert the new champ after the **last** fully visible champ.
That's what this commit does. Now the "Add new champ" behavior feels
much less confusing.
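The lookup is roughly (a sketch; the selector is an assumption):

    // Find the last champ that is entirely inside the viewport.
    function lastFullyVisibleChamp() {
      const champs = document.querySelectorAll('.champ');
      return [...champs].reverse().find((champ) => {
        const rect = champ.getBoundingClientRect();
        return rect.top >= 0 && rect.bottom <= window.innerHeight;
      });
    }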
Under some circumstances (like dispatching events just before a page
navigation), IE 11 events will be dispatched twice by the browser: once
with the payload, and a second time without.
To prevent these errors, ignore the events if the payload is missing.
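The guard is minimal (the event name and handler are examples):

    document.addEventListener('autosave:status-changed', (event) => {
      // IE 11 may dispatch the event a second time, without its payload
      if (!event.detail) return;
      updateStatus(event.detail); // hypothetical handler
    });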
During integration tests, we don't want to load the tiles from OSM:
- It hits OSM servers during every test run;
- It is slow (Capybara waits for the tiles to be loaded to proceed);
- It makes tests time out when tiles cannot be loaded for some reason.
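One way to achieve this (a sketch; `map` comes from the surrounding
code):

    import L from 'leaflet';

    // navigator.webdriver is set by Selenium-driven browsers: skip the
    // OSM tile layer entirely during integration tests.
    if (!navigator.webdriver) {
      L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
        attribution: '© OpenStreetMap contributors'
      }).addTo(map);
    }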
Fix #3913