use dedicated archives queue
As the disk space used will increase, we want fine-grained control
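For context, a dedicated queue is typically declared with `queue_as`. A minimal sketch, assuming a job class along these lines (the class name is a guess, not necessarily the real one):

```ruby
class ProcedureArchiveJob < ApplicationJob
  # Run archive generation on its own queue so it can be throttled
  # or paused independently of the default queue.
  queue_as :archives

  def perform(archive)
    # ... build the archive ...
  end
end
```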
move zip logic into a dedicated method
zip
wip
wip
fix(spec): make the spec pass (green)
tech(improvements): avoid `File.delete(folder)` and favor `FileUtils.remove_entry_secure`, which is safer. Also wrap most of the code that opens files in blocks, so resources are cleaned up when the block ends. Lastly, use `attachment.download` to avoid heavy memory pressure (download in chunks, write in chunks); otherwise large files (> 1 GB) are loaded entirely into memory. What happens if we run multiple jobs/downloads in parallel?
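A minimal sketch of the three points above, assuming an attachment and a temporary working directory (method and variable names are illustrative, not taken from the actual code):

```ruby
require 'fileutils'

# Write the attachment to disk in chunks, so a large blob (> 1 GB)
# is never loaded into memory all at once.
def write_attachment(attachment, path)
  File.open(path, 'wb') do |file|   # block form: the file is closed when the block ends
    attachment.download { |chunk| file.write(chunk) }
  end
end

# Remove the working directory with FileUtils.remove_entry_secure,
# which is safer than File.delete against symlink tricks.
def cleanup(tmp_dir)
  FileUtils.remove_entry_secure(tmp_dir) if Dir.exist?(tmp_dir)
end
```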
fix(spec): retry gracefully
clean(procedure_archive_service_spec.rb): better retry (avoid rewriting an already-open file)
lint(things): everything
Follow-up to #5953.
Refactor the concerns with two goals:
- Getting closer to the way ActiveStorage adds its own hooks.
Usually ActiveStorage does this using an `Attachment#after_create`
hook, which then delegates to the blob to enqueue the job.
- Enqueuing each job only once. By hooking into `Attachment#after_create`,
we guarantee each job is enqueued only once.
We then let the jobs themselves check if they are relevant or not, and
retry or discard themselves if necessary.
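As an illustration of that shape, a hypothetical sketch (not the real concern or job names), assuming Rails with ActiveStorage:

```ruby
# The concern hooks into Attachment#after_create and delegates to the blob,
# which enqueues the job; the hook fires once per attachment creation.
module AttachmentVirusScannerConcern
  extend ActiveSupport::Concern

  included do
    after_create { blob.enqueue_virus_scan }
  end
end

class VirusScannerJob < ApplicationJob
  def perform(blob)
    # The job checks by itself whether it is still relevant, and retries
    # or discards itself if not (`virus_scanned?` is a hypothetical helper).
    return if blob.virus_scanned?
    blob.open { |file| } # ... run the scan on the downloaded file ...
  end
end
```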
We also need to update the tests a bit, because Rails'
`perform_enqueued_jobs(&block)` test helper doesn't honor the `retry_on`
clause of jobs. Instead it forwards the exception to the caller – which
makes the test fail.
Instead we use the inline version of `perform_enqueued_jobs()`, without
a block, which properly ignores errors caught by `retry_on`.
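For illustration, the change in the specs is roughly the following (the spec content itself is hypothetical):

```ruby
# Before: the block form re-raises exceptions that retry_on would
# normally catch, so a transient failure inside the job fails the spec.
perform_enqueued_jobs do
  subject
end

# After: enqueue first, then run the jobs inline; per the note above,
# exceptions handled by retry_on no longer bubble up to the example.
subject
perform_enqueued_jobs
```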
We have errors in production where the job starts correctly (i.e. the
blob exists), but `blob.open` fails with an `ActiveStorage::FileNotFound`
error.
When we check later in production, the blob has been deleted.
This points to the blob (and the file) being deleted during the virus
scan job.
In that case, ignore the error (rather than retrying the job).
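Concretely, the fix amounts to discarding the job on that error instead of retrying it; a sketch, with the job name as a placeholder:

```ruby
class VirusScannerJob < ApplicationJob
  # The blob (and its file) can be deleted while the scan is queued or
  # running; there is nothing left to scan, so drop the job silently.
  discard_on ActiveStorage::FileNotFound

  def perform(blob)
    blob.open do |file|
      # ... scan the downloaded file ...
    end
  end
end
```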