Commit graph

491 commits

Author SHA1 Message Date
simon lehericey
4d3412daf5 batch it 2022-04-12 11:56:56 +02:00
simon lehericey
1dcfb2509f check nonce 2022-04-11 14:40:02 +02:00
simon lehericey
9938586d96 check state 2022-04-11 13:56:27 +02:00
simon lehericey
e2a54e3ee3 extract bill_ids per batch, and catch them all afterwards to avoid duplicates 2022-04-07 12:06:13 +02:00
simon lehericey
f24f6ee105 extract operation_logs_and_signature from pjs_for_dossiers 2022-04-07 12:06:13 +02:00
simon lehericey
c27d9a01f2 fetch documents by batch to avoid huge memory load 2022-04-07 12:06:13 +02:00
simon lehericey
e1afb35ca2 fetch pjs_for_dossier by dossiers
put all bills in bills folder
2022-04-07 12:06:13 +02:00
simon lehericey
11aedb2dc8 fetch pjs_for_commentaires by dossiers 2022-04-07 12:06:13 +02:00
simon lehericey
5631141a46 fetch pjs_for_champ by dossiers 2022-04-07 12:06:13 +02:00
simon lehericey
34b0578d70 pj_and_path only takes a dossier id 2022-04-07 12:06:13 +02:00
simon lehericey
7ac1288905 pj_service takes a dossier collection! 2022-04-07 12:06:13 +02:00
Martin
9e8807d12a feat(ArchiveUploader.upload_with_chunking_wrapper): retry once on error 2022-04-05 15:11:21 +02:00
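A minimal sketch of the retry-once behaviour this commit describes; the method name upload_with_chunking and the rescued error class are assumptions, not the actual ArchiveUploader code:

    # Hedged sketch: call the chunked upload and retry it exactly once on error.
    def upload_with_chunking_wrapper(file_path)
      attempts ||= 0
      upload_with_chunking(file_path)
    rescue StandardError
      attempts += 1
      retry if attempts == 1 # retry once, then give up
      raise
    end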
Martin
c1884f231c Revert "Merge pull request #7105 from betagouv/US/fix-dossier.processed_in_month"
This reverts commit a0e144b9a7, reversing
changes made to 49848bd150.
2022-04-05 13:39:37 +02:00
Martin
c07e0fc13e fix(Dossier.processed_in_month): ensure proper usage via method sig instead of defensive programming style 2022-04-05 12:14:07 +02:00
simon lehericey
bd0b88a410 move create_list_of_attachments 2022-04-05 11:55:14 +02:00
simon lehericey
1f98f75ccc remove unused method 2022-04-05 11:55:14 +02:00
simon lehericey
f2fea1f882 faster pjs_for_dossier 2022-04-05 11:55:14 +02:00
simon lehericey
62e0553a4e faster operation logs 2022-04-05 11:55:14 +02:00
simon lehericey
437e871f79 extract operation_logs_and_signatures method 2022-04-05 11:55:14 +02:00
simon lehericey
57f9e5bac3 always allow dossier pjs download (-9 queries) 2022-04-04 17:26:49 +02:00
simon lehericey
dca6e65f8d speed up commentaires 2022-04-01 15:51:43 +02:00
simon lehericey
0555ff68cd speed up pjs_for_champs * 10 2022-04-01 15:51:41 +02:00
Martin
dbcf21a555 feat(archive): extract archive status management into the job to simplify the main service as well as to isolate this part for a merge with csv/xlsx exports [maybe?]
Update app/dashboards/archive_dashboard.rb

Co-authored-by: LeSim <mail@simon.lehericey.net>
2022-03-31 13:35:49 +02:00
Paul Chavard
44c64669e9 Revert "Merge pull request #6787 from tchak/use-vite"
This reverts commit 5d572727b5, reversing
changes made to 43be4482ee.
2022-03-31 12:07:52 +02:00
Martin
ab2caaa5f7 fix(ProcedureArchiveService.zip_root_folder): should take an archive instance, otherwise when we generate many archives for the same procedure, errors may occur 2022-03-30 16:29:54 +02:00
Paul Chavard
187e84a010 feat(assets): use vitejs to build javascript 2022-03-29 16:27:08 +02:00
Martin
98c1fb8abc feat(archive): drop old feature 2022-03-18 14:26:09 +01:00
Martin
5739150f15 feat(service/archive_uploader): add an archive uploader class to upload files through a custom script which handles file encryption of massive files (bigger than 4 GB)
Update doc/object-storange-and-data-encryption.md

Co-authored-by: LeSim <mail@simon.lehericey.net>

Update app/services/archive_uploader.rb

Co-authored-by: LeSim <mail@simon.lehericey.net>

Update doc/object-storange-and-data-encryption.md

Co-authored-by: Pierre de La Morinerie <kemenaran@gmail.com>

clean(doc): align document file name and document h1

clean(review): refactor based on various comments

clean(review): refactor based on various comments
2022-03-16 14:56:21 +01:00
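A rough sketch of the approach this commit message describes: route very large archives through an external script and smaller ones through the regular ActiveStorage attach. The 4 GB threshold comes from the message, but the script name, model and method names below are assumptions:

    # Hedged sketch: choose between a direct ActiveStorage upload and an
    # external script that handles chunking/encryption of very large files.
    class ArchiveUploader
      MAX_DIRECT_UPLOAD_BYTES = 4 * 1024 ** 3 # ~4 GB, per the commit message

      def upload(archive, file_path)
        if File.size(file_path) >= MAX_DIRECT_UPLOAD_BYTES
          # Hypothetical helper script; the real project uses its own tooling.
          system('bin/upload_large_file', file_path, exception: true)
        else
          archive.file.attach(
            io: File.open(file_path),
            filename: File.basename(file_path)
          )
        end
      end
    end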
Paul Chavard
ec2f2dc78c feat(graphql): expose more dossier informations 2022-03-14 15:58:02 +01:00
Paul Chavard
54b559364a feat(dossier): replace discarded with visible_by_administration 2022-03-10 14:29:40 +01:00
Jon
97feca6305 feat(ClamAV): add config to disable clamav usage 2022-02-15 09:15:47 +01:00
Kara Diaby
5d10158fa6 Instructeur: can no longer click on a deleted dossier in the search 2022-02-03 11:17:39 +01:00
Paul Chavard
3d8471e064 fix(dossier): do not send notification on expiration when dossier is already deleted 2022-01-19 17:52:53 +01:00
Paul Chavard
7937e58caa fix(archives): only export dossiers in archive groupe_instructeurs
fix #6793
2022-01-18 11:16:20 +01:00
Martin
ce1b189dcd refactor(DownloadManager): extract parallel download into a dedicated class; move error management into a custom class for procedure exports using the dedicated class 2022-01-04 16:17:03 +01:00
Martin
2ed9cccba0 extract downloading of all attachments into a dedicated class using async/async-http for better perf 2022-01-04 16:17:03 +01:00
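For context, a minimal sketch of concurrent downloads with the async and async-http gems mentioned above; the method name, the URL/path pairs and the error handling are simplified assumptions, not the project's DownloadManager:

    # Hedged sketch: download several URLs concurrently using async-http.
    require 'async'
    require 'async/barrier'
    require 'async/http/internet'

    def download_all(urls_and_paths)
      Async do
        internet = Async::HTTP::Internet.new
        barrier = Async::Barrier.new

        urls_and_paths.each do |url, path|
          barrier.async do
            response = internet.get(url)
            File.binwrite(path, response.read)
          end
        end

        barrier.wait # block until every download task has finished
      ensure
        internet&.close
      end
    end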
Christophe Robillard
882f92268c update zone to procedures 2021-12-16 17:20:06 +01:00
Martin
721d926c0d wip(fix): chdir 2021-12-13 16:37:04 +01:00
mfo
148be50c86 Update app/services/procedure_archive_service.rb
Co-authored-by: LeSim <mail@simon.lehericey.net>
2021-12-13 16:37:04 +01:00
Martin
f7bc387e44 feat(flipper): use the new way of generating archives only on procedures flipped with the new actor :zip_using_binary 2021-12-13 16:37:04 +01:00
simon lehericey
f0b0e7fd9a Switch to the unix zip binary to create archives. Also use a dedicated queue for DelayedJob
use dedicated archives queue

As the used disk space will increase, we want fine-grained control

move zip logic in dedicated method

zip

wip

wip

fix(spec): pass spec in green

tech(improvements): avoid File.delete(folder), favor FileUtils.remove_entry_secure which is safer. Also wrap most of the code that opens files within blocks so it is cleaned up when the block ends. Lastly, use attachment.download to avoid big memory pressure [download in chunks, write in chunks]; otherwise big files [124 > 1 GB] are loaded in memory. What if we run multiple jobs/downloads in parallel?

fix(spec): try to retry with grace

clean(procedure_archive_service_spec.rb): better retry [avoid rewriting an open file]

lint(things): everything
2021-12-13 16:37:04 +01:00
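The attachment.download point above refers to streaming the blob instead of loading it whole; a minimal sketch of that pattern using ActiveStorage's block form of download (the helper name and call site are assumptions):

    # Hedged sketch: write an ActiveStorage attachment to disk chunk by chunk,
    # so a multi-gigabyte file is never held in memory all at once.
    def write_attachment_to_disk(attachment, destination_path)
      File.open(destination_path, 'wb') do |file|
        attachment.download { |chunk| file.write(chunk) }
      end
    end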
Pierre de La Morinerie
0e35bc609d notifications: don't preload dossiers on instructeurs
This request currently times out almost every night in production.

It's because although Instructeurs are loaded in batches (default batch
size is 1000), loading all dossiers for 1000 instructeurs is slow.

Turns out the code executed after this query to compute notifications
doesn't even use these dossiers. Indeed it is faster not to preload
them: both the initial query and the total treatment time are shorter.

Here's a quick benchmark made locally (but using production data):

- Before this commit:

    Benchmark.measure { pp Instructeur.includes(assign_to: { procedure: :dossiers }).where(assign_tos: { daily_email_notifications_enabled: true }).limit(100).map(&:email_notification_data) }

Only the initial query: 35s
Total time: 97s

- Without preloading dossiers:

    Benchmark.measure { pp Instructeur.includes(assign_to: :procedure).where(assign_tos: { daily_email_notifications_enabled: true }).limit(100).map(&:email_notification_data) }

Only the initial query: 0.08s (400x faster)
Total time: 29s (3.3x faster)

Plus it doesn't timeout, of course.
2021-12-09 12:10:00 +01:00
Martin
1bb868714c fix(spec/lint/review): lint and fix spec of previous commits, also fix based on tchak feedback 2021-12-06 07:05:17 +01:00
Kara Diaby
24ba7b6633 modify dossier projection service 2021-11-26 09:45:13 +01:00
simon lehericey
5234a1854c manage AgentConnect callback 2021-11-23 14:17:59 +01:00
simon lehericey
898df449d4 redirect to AgentConnect 2021-11-23 14:17:59 +01:00
simon lehericey
d2432e34eb AgentConnect UI 2021-11-23 14:17:59 +01:00
Paul Chavard
f6b8689a97 fix(revisions): fix repetitions export with revisions 2021-11-03 18:20:48 +01:00
Paul Chavard
0e2f09dd6f fix(dossiers): wrap dossier discard in a transaction
By doing this we ensure that a deleted_dossier is not created when the dossier is not discarded
2021-11-02 18:17:35 +01:00
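A minimal sketch of the transactional discard described above; the method name and the DeletedDossier factory are assumptions based on the message, not necessarily the actual code:

    # Hedged sketch: run the discard and the DeletedDossier creation in one
    # transaction so the audit record is rolled back if the discard fails.
    def discard_and_keep_track!(reason)
      transaction do
        discard!
        DeletedDossier.create_from_dossier(self, reason)
      end
    end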
Pierre de La Morinerie
d0e87a08cf services: cache zxcvbn dictionaries per-thread
Before, every time a password was tested, the dictionaries were parsed
again by zxcvbn.

Parsing dictionaries is slow: it may take up to ~1s. This doesn't matter
that much in production, but it makes tests very slow (because we tend
to create a lot of User records).

With this change, the initialized tester is shared between calls, class
instances and threads. It is lazily loaded on first use, in order not to
slow down the application boot sequence.

This uses ~20 MB of memory (only once for all threads), but makes tests
more than twice as fast.

For instance, model tests go from **8m 21s** to **3m 26s**.

NB:
An additional optimization could be to preload the tester on
boot, before workers are forked, to take advantage of Puma's copy-on-write
mechanism. In this way all forked workers would use the same cached
instance.

But:

- We're not actually sure this would work properly. What if Ruby updates
  an internal ivar on the class, and this forces the OS to copy the
  whole data structure in each fork?
- Puma phased restarts are not compatible with copy-on-write anyway.

So we're avoiding this optimization for now, and accept the extra 20 MB
per worker.
2021-10-25 12:04:56 +02:00
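A minimal sketch of the caching described in this message, assuming the zxcvbn-ruby gem's Zxcvbn::Tester API and a small service object; the class name and the locking scheme are assumptions:

    # Hedged sketch: build the zxcvbn tester (and parse its dictionaries) once,
    # lazily, and share the instance across calls, objects and threads.
    require 'zxcvbn'

    class PasswordStrengthChecker
      TESTER_MUTEX = Mutex.new

      def self.tester
        # Fast path is lock-free once the tester has been built.
        @tester || TESTER_MUTEX.synchronize { @tester ||= Zxcvbn::Tester.new }
      end

      def score(password)
        self.class.tester.test(password).score
      end
    end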