We create the establishment from the SIRET alone, without making this a blocking step
for completing the dossier. We ask the user to check
for themselves that the SIRET matches their company.
See #7766
Update doc/object-storange-and-data-encryption.md
Co-authored-by: LeSim <mail@simon.lehericey.net>
Update app/services/archive_uploader.rb
Co-authored-by: LeSim <mail@simon.lehericey.net>
Update doc/object-storange-and-data-encryption.md
Co-authored-by: Pierre de La Morinerie <kemenaran@gmail.com>
clean(doc): align document file name and document h1
clean(review): refactor based on various comments
clean(review): refactor based on various comments
use a dedicated archives queue
As the disk space used will increase, we want finer-grained control
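A minimal sketch of what routing archive work to its own queue could look like (the job class name here is hypothetical; the point is the dedicated `:archives` queue, which can be throttled or scaled independently of the default queue):

```ruby
# Hypothetical job name: the real job may differ.
class ArchiveCreationJob < ApplicationJob
  queue_as :archives

  def perform(procedure)
    # build the archive for the given procedure…
  end
end
```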
move zip logic into a dedicated method
zip
wip
wip
fix(spec): make the spec pass (green)
tech(improvements): avoid File.delete(folder) and favor FileUtils.remove_entry_secure, which is safer. Also wrap most of the code that opens files in blocks, so resources are cleaned up when the block ends. Lastly, use attachment.download in chunks (download in chunks, write in chunks) to avoid heavy memory pressure; otherwise large files (> 1 GB) are loaded entirely into memory. What happens if we run multiple jobs/downloads in parallel?
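A minimal sketch of the pattern described above, assuming an ActiveStorage `attachment` is in scope (the temp-folder handling and processing step are illustrative):

```ruby
require "fileutils"
require "tmpdir"

tmp_dir = Dir.mktmpdir
begin
  file_path = File.join(tmp_dir, attachment.filename.to_s)

  # Stream the blob chunk by chunk instead of loading it fully into memory.
  File.open(file_path, "wb") do |file|
    attachment.download { |chunk| file.write(chunk) }
  end

  # …zip or otherwise process file_path here…
ensure
  # Safer than File.delete on a folder, and removes the whole tree.
  FileUtils.remove_entry_secure(tmp_dir)
end
```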
fix(spec): try to retry with grace
clean(procedure_archive_service_spec.rb): better retry (avoid rewriting an open file)
lint(things): everything
Calling business logic in a factory is a code smell, because it
usually requires the object to be saved to the database, and may have
unintended consequences when the business logic changes.
Also, this allows us to just build a published procedure, without saving it
to the database.
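A minimal sketch of the idea, assuming FactoryBot (the factory, trait and attribute names are hypothetical; the real trait may set other fields):

```ruby
# Instead of calling business logic such as procedure.publish! in the
# factory, set the attributes that represent the published state directly.
FactoryBot.define do
  factory :procedure do
    libelle { "My procedure" }

    trait :published do
      # Plain attribute assignment: works with build/build_stubbed,
      # no database write or side effects required.
      aasm_state { "publiee" }
      published_at { Time.zone.now }
    end
  end
end

# Usage: a published procedure without touching the database.
procedure = FactoryBot.build(:procedure, :published)
```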
This fix prevents repetition children types de champ from being pulled from cloned procedures: stable_id is stable across revisions, but also across cloned procedures.
Before, every time a password was tested, the dictionaries were parsed
again by zxcvbn.
Parsing dictionaries is slow: it may take up to ~1s. This doesn't matter
that much in production, but it makes tests very slow (because we tend
to create a lot of User records).
With this change, the tester is shared between calls, class
instances and threads. It is lazily loaded on first use, in order not to
slow down the application boot sequence.
This uses ~20 MB of memory (only once, for all threads), but makes tests
more than twice as fast.
For instance, model tests go from **8m 21s** to **3m 26s**.
NB:
An additional optimization could be to preload the tester on
boot, before workers are forked, to take advantage of Puma's copy-on-write
mechanism. In this way all forked workers would use the same cached
instance.
But:
- We're not actually sure this would work properly. What if Ruby updates
an internal ivar on the class, and this forces the OS to copy the
whole data structure in each fork?
- Puma phased restarts are not compatible with copy-on-write anyway.
So we're avoiding this optimisation for now, and accept the extra 20 MB
per worker.
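A minimal sketch of the lazy, shared tester described above, assuming the zxcvbn Ruby gem (the service class name is hypothetical; the real initializer may configure custom dictionaries):

```ruby
require "zxcvbn"

# Hypothetical service wrapping zxcvbn. The tester (and its parsed
# dictionaries) is built once, then shared across calls, instances
# and threads.
class PasswordComplexityService
  MUTEX = Mutex.new

  def self.tester
    # Lazily build the tester on first use so application boot stays fast.
    MUTEX.synchronize do
      @tester ||= Zxcvbn::Tester.new
    end
  end

  def score(password)
    self.class.tester.test(password).score
  end
end
```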
Use one-dimensional array comparisons & the `contain_exactly` RSpec matcher
to avoid this behaviour:
Failures:
1) InstructeursImportService#import when an email is malformed ignores or corrects
Failure/Error:
expect(procedure_groupes).to match_array([
["Occitanie", ["paul@mccartney.uk", "ringo@starr.uk"]],
["défaut", []]
])
expected collection contained: [["Occitanie", ["paul@mccartney.uk", "ringo@starr.uk"]], ["défaut", []]]
actual collection contained: [["Occitanie", ["ringo@starr.uk", "paul@mccartney.uk"]], ["défaut", []]]
the missing elements were: [["Occitanie", ["paul@mccartney.uk", "ringo@starr.uk"]]]
the extra elements were: [["Occitanie", ["ringo@starr.uk", "paul@mccartney.uk"]]]
# ./spec/services/instructeurs_import_service_spec.rb:70:in `block (4 levels) in <main>'
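A hedged sketch of the resulting style of assertion (variable names follow the failure output above; the exact spec may differ):

```ruby
# match_array on nested arrays still compares the inner arrays with ==,
# so their element order matters. Comparing one-dimensional arrays with
# contain_exactly makes each expectation order-insensitive.
expect(procedure_groupes.map(&:first)).to contain_exactly("Occitanie", "défaut")
expect(procedure_groupes.to_h["Occitanie"]).to contain_exactly("paul@mccartney.uk", "ringo@starr.uk")
expect(procedure_groupes.to_h["défaut"]).to be_empty
```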
Follow-up of #5953.
Refactor the concerns with two goals:
- Getting closer to the way ActiveStorage adds its own hooks.
Usually ActiveStorage does this using an `Attachment#after_create`
hook, which then delegates to the blob to enqueue the job.
- Enqueuing each job only once. By hooking on `Attachment#after_create`,
we guarantee each job will be added only once.
We then let the jobs themselves check if they are relevant or not, and
retry or discard themselves if necessary.
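A minimal sketch of the pattern, with hypothetical concern, job and method names (the real hooks and relevance checks belong to the application's own concerns):

```ruby
# Hypothetical concern: enqueue the job once, from an after_create hook
# on the attachment, the same way ActiveStorage enqueues its own work.
module AttachmentVirusScannerConcern
  extend ActiveSupport::Concern

  included do
    after_create { VirusScannerJob.perform_later(blob) }
  end
end

class VirusScannerJob < ApplicationJob
  # Let the job decide whether it is still relevant, and retry or
  # discard itself instead of relying on the caller.
  retry_on ActiveStorage::FileNotFoundError, wait: 5.seconds, attempts: 10
  discard_on ActiveRecord::RecordNotFound

  def perform(blob)
    return if blob.virus_scan_result.present? # hypothetical relevance check

    blob.scan_for_virus! # hypothetical method
  end
end
```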
We also need to update the tests a bit, because Rails'
`perform_enqueued_jobs(&block)` test helper doesn't honor the `retry_on`
clause of jobs. Instead it forwards the exception to the caller – which
makes the test fail.
Instead we use the inline version of `perform_enqueued_jobs()`, without
a block, which properly ignores errors caught by `retry_on`.
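A hedged illustration of the test change, assuming an RSpec example with ActiveJob's test helpers (the job call and final expectation are illustrative):

```ruby
# With the block form, an error raised inside the job is re-raised to
# the caller even when the job declares retry_on, so the spec fails:
#
#   perform_enqueued_jobs do
#     blob.analyze_later
#   end
#
# The inline form performs the jobs already enqueued and lets retry_on
# handle transient errors instead of failing the example:
blob.analyze_later
perform_enqueued_jobs

expect(blob.reload.analyzed?).to be(true)
```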