Update doc/object-storange-and-data-encryption.md
Co-authored-by: LeSim <mail@simon.lehericey.net>
Update app/services/archive_uploader.rb
Co-authored-by: LeSim <mail@simon.lehericey.net>
Update doc/object-storange-and-data-encryption.md
Co-authored-by: Pierre de La Morinerie <kemenaran@gmail.com>
clean(doc): align the document file name with the document h1
clean(review): refactor based on various comments
clean(review): refactor based on various comments
use a dedicated archives queue
As the disk space used will increase, we want fine-grained control
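A minimal sketch of what routing archive work to its own queue can look like (the job and queue names below are illustrative assumptions, not the project's actual ones):

```ruby
# Illustrative only: a job pinned to a dedicated :archives queue, so archive
# generation can be throttled, paused or scaled independently of the default
# queue as the disk space it consumes grows.
class ArchiveCreationJob < ApplicationJob
  queue_as :archives

  def perform(procedure_id)
    # build and upload the archive for the given procedure here
  end
end
```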
move zip logic into a dedicated method
zip
wip
wip
fix(spec): make the spec pass (green)
tech(improvements): avoid File.delete(folder); favor FileUtils.remove_entry_secure, which is safer. Also wrap most of the code that opens files in blocks, so resources are cleaned up when the block ends. Lastly, use attachment.download with a block to avoid heavy memory pressure (download in chunks, write in chunks); otherwise big files (> 1 GB) are loaded entirely into memory. What happens if we run multiple jobs/downloads in parallel?
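A rough sketch of the pattern described above; the method name, paths and variables are assumptions, not the real ArchiveUploader code:

```ruby
require 'fileutils'
require 'securerandom'
require 'tmpdir'

# Illustrative only: stream an ActiveStorage attachment to disk chunk by chunk,
# hand the working directory to the caller, then clean it up safely.
def with_downloaded_attachment(attachment)
  tmp_dir = File.join(Dir.tmpdir, "archive-#{SecureRandom.hex(8)}")
  FileUtils.mkdir_p(tmp_dir)

  File.open(File.join(tmp_dir, attachment.filename.to_s), 'wb') do |file|
    # Passing a block to #download streams the blob in chunks,
    # so a file larger than 1 GB is never held in memory all at once.
    attachment.download { |chunk| file.write(chunk) }
  end

  yield tmp_dir
ensure
  # Safer than File.delete(folder); removes the whole tree even on failure.
  FileUtils.remove_entry_secure(tmp_dir) if tmp_dir && Dir.exist?(tmp_dir)
end
```

Usage would be along the lines of `with_downloaded_attachment(attachment) { |dir| zip_and_upload(dir) }`, where `zip_and_upload` stands for whatever the caller does with the downloaded files.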
fix(spec): try to retry with grace
clean(procedure_archive_service_spec.rb): better retry [avoid rewriting an open file]
lint(things): everything
This request currently times out almost every night in production.
This is because, although Instructeurs are loaded in batches (the default
batch size is 1000), loading all the dossiers for 1000 instructeurs is slow.
It turns out that the code executed after this query to compute notifications
doesn't even use these dossiers. It is actually faster not to preload
them: both the initial query and the total processing time are shorter.
Here's a quick benchmark made locally (but using production data):
- Before this commit:
Benchmark.measure { pp Instructeur.includes(assign_to: { procedure: :dossiers }).where(assign_tos: { daily_email_notifications_enabled: true }).limit(100).map(&:email_notification_data) }
Only the initial query: 35s
Total time: 97s
- Without preloading dossiers:
Benchmark.measure { pp Instructeur.includes(assign_to: :procedure).where(assign_tos: { daily_email_notifications_enabled: true }).limit(100).map(&:email_notification_data) }
Only the initial query: 0.08s (400x faster)
Total time: 29s (3.3x faster)
Plus it doesn't time out, of course.
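For reference, a hedged sketch of what the change looks like at the call site; only the `includes` difference comes from the benchmark above, the batched iteration and surrounding code are assumptions:

```ruby
# Before (assumed shape): preloads every dossier of every procedure,
# even though email_notification_data never reads them.
Instructeur
  .includes(assign_to: { procedure: :dossiers })
  .where(assign_tos: { daily_email_notifications_enabled: true })
  .find_each(batch_size: 1000) { |instructeur| instructeur.email_notification_data }

# After: preload only what the notification computation actually touches.
Instructeur
  .includes(assign_to: :procedure)
  .where(assign_tos: { daily_email_notifications_enabled: true })
  .find_each(batch_size: 1000) { |instructeur| instructeur.email_notification_data }
```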
Before, every time a password was tested, the dictionaries were parsed
again by zxcvbn.
Parsing dictionaries is slow: it may take up to ~1s. This doesn't matter
that much in production, but it makes tests very slow (because we tend
to create a lot of User records).
With this change, the zxcvbn tester is shared between calls, class
instances and threads. It is lazily loaded on first use, so as not to
slow down the application boot sequence.
This uses ~20 MB of memory (only once for all threads), but makes tests
more than twice as fast.
For instance, model tests go from **8m 21s** to **3m 26s**.
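A minimal sketch of this caching, assuming the zxcvbn-ruby gem (where Zxcvbn::Tester.new parses the dictionaries and #test scores a password); the module and method names are illustrative:

```ruby
require 'zxcvbn'

# Illustrative only: one lazily-built tester shared across calls, class
# instances and threads, so the dictionaries are parsed once per process.
module ZxcvbnService
  MUTEX = Mutex.new

  def self.tester
    # Fast path once built; the mutex prevents two threads from parsing
    # the dictionaries concurrently on first use.
    @tester || MUTEX.synchronize { @tester ||= Zxcvbn::Tester.new }
  end

  def self.score(password)
    tester.test(password.to_s).score
  end
end
```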
NB:
An additional optimization could be to preload the tester on
boot, before workers are forked, to take advantage of Puma's copy-on-write
mechanism. That way, all forked workers would share the same cached
instance.
But:
- We're not actually sure this would work properly. What if Ruby updates
an internal ivar on the class, and this forces the OS to copy the
whole data structure in each fork?
- Puma phased restarts are not compatible with copy-on-write anyway.
So we're avoiding this optimization for now, and accept the extra 20 MB
per worker.