This fixes an issue where, by default, links to private attachments are
reported to Matomo.
This is benign: attachment URLs can be filtered out server-side, and
expire after one hour anyway. But we don't want to ship an insecure
configuration by default.
use dedicated archives queue
As the disk space used will increase, we want fine-grained control.
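A rough sketch of the intent, assuming ActiveJob (the job class, queue name and method below are illustrative, not the real ones):

```
# Run archive creation on its own queue, so these disk-heavy jobs can be
# throttled and monitored separately from the default queue.
class ArchiveCreationJob < ApplicationJob
  queue_as :archives # dedicated queue

  def perform(procedure)
    ProcedureArchiveService.new(procedure).create_archive # method name assumed
  end
end
```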
move zip logic into a dedicated method
zip
wip
wip
fix(spec): make the spec green
tech(improvements): avoid File.delete(folder); favor FileUtils.remove_entry_secure, which is safer. Also wrap most of the code that opens files within blocks, so resources are cleaned up when the block ends. Lastly, use attachment.download to avoid high memory pressure (download in chunks, write in chunks); otherwise large files (> 1 GB) are loaded entirely into memory. What if we run multiple jobs/downloads in parallel?
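A hedged sketch of the pattern described above (variable names illustrative):

```
require 'tmpdir'
require 'fileutils'

tmp_dir = Dir.mktmpdir
begin
  File.open(File.join(tmp_dir, attachment.filename.to_s), 'wb') do |file|
    # With a block, ActiveStorage streams the blob and yields it in chunks,
    # so a file larger than 1 GB is never held in memory all at once.
    attachment.download { |chunk| file.write(chunk) }
  end # the block closes the file handle even if an error is raised
  # … zip tmp_dir here …
ensure
  # safer than File.delete / rm_rf on a directory
  FileUtils.remove_entry_secure(tmp_dir)
end
```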
fix(spec): try to retry with grace
clean(procedure_archive_service_spec.rb): better retry (avoid rewriting an open file)
lint(things): everything
This request currently times out almost every night in production.
It's because although Instructeurs are loaded in batches (default batch
size is 1000), loading all dossiers for 1000 instructeurs is slow.
It turns out the code executed after this query to compute notifications
doesn't even use these dossiers. It is in fact faster not to preload
them: both the initial query and the total processing time are shorter.
Here's a quick benchmark made locally (but using production data):
- Before this commit:
  Benchmark.measure { pp Instructeur.includes(assign_to: { procedure: :dossiers }).where(assign_tos: { daily_email_notifications_enabled: true }).limit(100).map(&:email_notification_data) }
  Initial query only: 35s
  Total time: 97s
- Without preloading dossiers:
  Benchmark.measure { pp Instructeur.includes(assign_to: :procedure).where(assign_tos: { daily_email_notifications_enabled: true }).limit(100).map(&:email_notification_data) }
  Initial query only: 0.08s (400x faster)
  Total time: 29s (3.3x faster)
Plus it doesn't time out, of course.
feat(expiration_banner): enhance wording of expiration
feat(dossiers/expiration_banner): enhance the expiration wording to account for duree_conservation_dossiers_dans_ds + extension_conservation; also add a spec on the expiration banner for instructeurs
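For context, a minimal sketch of the retention computation the new wording is based on (method name and nil handling are assumptions; both durations assumed to be in months):

```
# Total retention = the procedure's base retention plus any extension
# granted on the dossier.
def total_conservation_in_months
  procedure.duree_conservation_dossiers_dans_ds + (extension_conservation || 0)
end
```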
When a new "Menu" type de champ is added, it comes pre-filled with a
menu title – and nothing else. Which is confusing, and invalid.
Instead pre-fill the type de champ with actual values (no titles).
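Illustrative only, assuming the convention that a drop-down line wrapped in `--` is a (non-selectable) menu title while plain lines are values:

```
# Before: only a title, so there was nothing the user could select.
old_default = "--Un titre--"
# After: pre-fill with actual selectable values (example values made up).
new_default = "Premier choix\nDeuxième choix"
```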
fix(lint): lint haml
fix(spec): enable flipper and allow procedure to receive flipper check when checking banner presence
fix(doc): add missing documentation in the README regarding system testing with visual feedback
fix(typo): add missing accent
clean(PR): feedback from Tchak; better to wrap the procedure-level expirability feature check within a dossier.expirable? helper
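A hedged sketch of the suggested helper, assuming a Flipper-backed feature_enabled? helper on Procedure (the flag name is made up):

```
class Dossier < ApplicationRecord
  # Let the dossier answer "can I expire?" by checking the feature flag
  # on its procedure, instead of callers doing the check themselves.
  def expirable?
    procedure.feature_enabled?(:expiration_banner) # flag name assumed
  end
end
```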
tech(question): discard_and_keep_track! ; are we really keeping track with default_scope { kept } ?
feat(stats): add DeletedDossier in Stat computations
Revert "tech(question): discard_and_keep_track! ; are we really keeping track with default_scope { kept } ?"
This reverts commit d1155b7eeaaf1a9f80189e59667e109541fcb089.
feat(stats): support deleted_dossiers for last_four_months_hash and cumulative_hash; extract query sanitization & hash merging into methods
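A minimal sketch of the hash-merging extraction (method and variable names assumed):

```
# Combine per-month counts of Dossier and DeletedDossier into one hash,
# summing the counts for months present in both.
def merge_monthly_counts(dossiers_by_month, deleted_by_month)
  dossiers_by_month.merge(deleted_by_month) { |_month, a, b| a + b }
end
```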
clean(rubocop): lint with rubocop
Update db/migrate/20211126080118_add_index_to_deleted_at_to_deleted_dossiers.rb
Co-authored-by: LeSim <mail@simon.lehericey.net>
fix(rubocop): avoid unneeded allocation
fix(migration): add concurrent index with the expected syntax
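For reference, the expected shape: a concurrent index has to be created outside a DDL transaction (migration class name taken from the file above; Rails version assumed):

```
class AddIndexToDeletedAtToDeletedDossiers < ActiveRecord::Migration[6.1]
  # CREATE INDEX CONCURRENTLY cannot run inside a transaction block
  disable_ddl_transaction!

  def change
    add_index :deleted_dossiers, :deleted_at, algorithm: :concurrently
  end
end
```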
fix(brakeman): add an ignore message, since the group date_trunc evaluation is used only by ourselves
i18n(france_connect/*): replace wording with i18n
fix(lint): i18n key issue
secu(views/france_connect/particulier/merge.html.haml): sanitize france_connect_email just in case
fix(brakeman): sanitize FCI.email_france_connect when used with html_safe via I18n.t; also add an exception to Brakeman
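A hedged sketch of the sanitization (the I18n key and call site are made up):

```
# Sanitize the FranceConnect email before interpolating it into an I18n
# string that is later marked html_safe.
safe_email = sanitize(fci.email_france_connect)
t('.merge_notice_html', email: safe_email)
```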
feat(fci.confirmation_code): add confirmation code to france_connect_informations
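A minimal sketch of what storing such a code could look like (the code format and method name are assumptions):

```
class FranceConnectInformation < ApplicationRecord
  # Generate and persist a short confirmation code, later emailed to the
  # user to validate the merge.
  def create_confirmation_code!
    update!(confirmation_code: SecureRandom.alphanumeric(6).upcase)
  end
end
```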
feat(user_mailer.france_connect_confirmation_code): add a confirmation-by-email mail method/preview/spec, pointing to merge_mail_with_existing_account (reusing the existing method)
feat(mail_merge): mail merge
feat(merge.cannot_use_france_connect): same behaviour as callback
clean(fci.confirmation_code): use the same token for mail validation as for the merge
feat(resend_france_connect/particulier/merge_confirmation): resend the email with the link; also improve some translations and clean up a half-finished refactoring
clean(tech): finalize the story by plugging merge_with_new_account into email validation
fix(deadspec): remove a spec whose code was removed
fix(spec): broken after last refactoring
lint(rubocop): space before parenthesis
lint(haml-lint): yoohoooo space before =
fix(lint): scss now :D
Update app/assets/stylesheets/buttons.scss
cleanup
feat(france_connect): re-add confirmation by email, with an option for confirmation by email instead of only confirmation by code
fixup! Add confirmation by email when merging DC/FC accounts
fix(lint): haml_spec failure
Deep-cloned objects have all their relationships stale. Thus, for a
newly deep-cloned revision, `revision.types_de_champs` returns `[]`,
even when it actually has associated types de champ.
This causes consecutive champ creations and re-ordering to fail in
subtle ways, like:
```
procedure.draft_revision.add_type_de_champ(…)
procedure.publish_revision!
procedure.draft_revision.add_type_de_champ(…)
procedure.draft_revision.move_type_de_champ(…) # this will fail
```
As `publish_revision!` creates a new stale revision, moving the type
de champ fails because not all existing champs are found until the
object is refreshed.
We don't hit this path in production, because usually only a single
operation is made in a request.
To fix this, save the new revision before associating it as the
procedure's draft revision.
(Another option would be to `reload` the revision after creation, but
this seems better contained and matches the name of the method.)
We used to pre-validate the procedure, to display in advance whether the
path could be used.
Now that the path autocomplete is long gone, we can remove this kludgy
code.
Currently, when a query can't be parsed, the error is:
- logged to Sentry (which is useless to us),
- returned as a generic 'Internal Server Error' (which is useless to the
user who made the query).
With this commit, the error is instead excluded from our logs (because it
is a user error), but the parse error details are returned to the user,
with the following format:
> {"errors": [{"message": "Parse error on \")\" (RPAREN) at [3, 23]"}]}
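A minimal sketch of the behaviour (controller shape and schema name assumed; GraphQL::ParseError is graphql-ruby's parse-error class):

```
# Return parse errors to the caller as a regular errors payload, and skip
# error reporting since these are user errors, not ours.
def execute
  render json: ApiSchema.execute(params[:query], variables: params[:variables])
rescue GraphQL::ParseError => e
  render json: { errors: [{ message: e.message }] }
end
```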