use dedicated archives queue
As used disk space will increase, we want fine-grained control over archive processing.
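A minimal sketch of the dedicated queue, assuming an ActiveJob job class (the job name and the create_archive entry point are hypothetical; ProcedureArchiveService is the service touched by the specs below):

    class ArchiveCreationJob < ApplicationJob
      # A dedicated queue lets disk-heavy archive work be throttled and
      # monitored separately from the default queue.
      queue_as :archives

      def perform(procedure)
        # Hypothetical entry point into the archive service.
        ProcedureArchiveService.new(procedure).create_archive
      end
    end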
move zip logic into a dedicated method
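A sketch of the extracted helper, assuming the rubyzip gem (the method name and paths are illustrative):

    require 'zip'

    # Zip every file under `folder` into the archive at `zip_path`.
    def zip_folder(folder, zip_path)
      Zip::File.open(zip_path, Zip::File::CREATE) do |zipfile|
        Dir.glob(File.join(folder, '**', '*')).each do |path|
          next if File.directory?(path)
          # Store entries relative to the folder root.
          zipfile.add(path.delete_prefix("#{folder}/"), path)
        end
      end
    end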
fix(spec): make the spec pass
tech(improvements):
- Avoid File.delete(folder); favor FileUtils.remove_entry_secure, which is safer.
- Wrap most of the code that opens files in blocks, so resources are cleaned up when the block ends.
- Use attachment.download with a block to avoid heavy memory pressure (download in chunks, write in chunks); otherwise large files (> 1 GB) are loaded entirely into memory. What happens if we run multiple jobs/downloads in parallel?
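A sketch of the safer file handling, assuming ActiveStorage (the helper name is hypothetical; `download` with a block is the chunked API):

    require 'fileutils'
    require 'tmpdir'

    # Stream an attachment to disk chunk by chunk, so a 1 GB file is
    # never held in memory all at once.
    def write_attachment(attachment, dest_path)
      File.open(dest_path, 'wb') do |file|
        attachment.download { |chunk| file.write(chunk) }
      end
    end

    tmp_dir = Dir.mktmpdir
    begin
      # ... build the archive under tmp_dir ...
    ensure
      # Safer than File.delete on a folder: guards against symlink
      # attacks in world-writable directories while removing the tree.
      FileUtils.remove_entry_secure(tmp_dir)
    end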
fix(spec): retry flaky specs gracefully
clean(procedure_archive_service_spec.rb): better retry (avoid rewriting an already-open file)
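A sketch of the retry setup, assuming the rspec-retry gem and a hypothetical :flaky tag:

    # spec/spec_helper.rb
    require 'rspec/retry'

    RSpec.configure do |config|
      config.verbose_retry = true
      # Each retry runs the example from scratch, so a failed attempt
      # never rewrites a file it left open.
      config.around(:each, :flaky) do |example|
        example.run_with_retry retry: 3
      end
    end

Examples tagged :flaky then retry up to three times before failing.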
lint: fix style offenses everywhere
In a9a4f6e2a8, a task to migrate
ProcedurePresentation's filters was added.
This task added a "migrated: true" key to all migrated filters.
Now that this task has run, we can safely remove the extra key.
In a previous version of this commit, the migration would fail for
invalid ProcedurePresentation records. This is now fixed.
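A sketch of the key removal, assuming `filters` is a serialized hash of filter lists (the exact structure is an assumption):

    ProcedurePresentation.find_each do |presentation|
      cleaned = presentation.filters.transform_values do |filter_list|
        filter_list.map { |filter| filter.except('migrated') }
      end
      # update_column skips validations, so invalid records no longer
      # make the task fail.
      presentation.update_column(:filters, cleaned)
    end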
- Make `champ.dossier` a requirement;
- Move the dossier_id assignment to `before_validation`, otherwise
  the record is invalid and never gets saved (see the sketch below);
- Allow specs to only build the champ (instead of saving it to the
  database), which bypasses the requirement to have a dossier.
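A sketch of the model change (where dossier_id comes from is an assumption; adapt to the real source):

    class Champ < ApplicationRecord
      belongs_to :dossier  # required by default since Rails 5

      # Assign dossier_id before validation; assigning it later leaves
      # the record invalid, so it would never be saved.
      before_validation :set_dossier_id

      private

      def set_dossier_id
        # Hypothetical source: inherit the id from a parent champ.
        self.dossier_id ||= parent&.dossier_id
      end
    end

In specs, build(:champ) instantiates the record without validating or saving it, so no dossier is needed; create(:champ) would still require one.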
Fixes a task that has since become invalid:
> NameError: uninitialized constant SeekAndDestroyExpiredDossiersJob
> /lib/tasks/deployment/20191203142402_enable_seek_and_destroy_job
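One possible fix, sketched under the assumption that the task follows the after_party layout used in lib/tasks/deployment (the guard is illustrative; the real fix may simply delete the reference):

    namespace :after_party do
      desc 'Deployment task: enable_seek_and_destroy_job'
      task enable_seek_and_destroy_job: :environment do
        # Guard against the removed job class so the task no longer
        # raises NameError: uninitialized constant.
        if defined?(SeekAndDestroyExpiredDossiersJob)
          SeekAndDestroyExpiredDossiersJob.perform_later
        end
        AfterParty::TaskRecord.create version: '20191203142402'
      end
    end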