By default, `has_and_belongs_to_many` properly deletes the record in
the join table.
However, as the association is declared manually with a
`has_many :through`, it doesn't delete the join record automatically.
As we also lack a foreign-key constraint on the join table, a dangling
record remains in the join table.
To fix this, let's declare it as a proper `has_and_belongs_to_many`
association, which lets the join record be deleted automatically
on destroy.
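A minimal sketch of the change, using hypothetical `Dossier`/`Tag`
models since the commit doesn't name the actual ones:
```ruby
class Dossier < ApplicationRecord
  # Before (join rows were left dangling when a dossier was destroyed):
  # has_many :dossier_tags
  # has_many :tags, through: :dossier_tags

  # After: Rails deletes the matching rows in the join table on destroy.
  has_and_belongs_to_many :tags # join table: dossiers_tags
end
```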
Before this commit, the monthly dossier counts were serialized into the
Stat record using human-formatted dates as keys:
```ruby
s.dossiers_in_the_last_4_months = {
  "octobre 2021"  => 409592,
  "novembre 2021" => 497823,
  "décembre 2021" => 38170,
  "janvier 2022"  => 0
}
```
It turns out the ordering of keys in a serialized hash is not
guaranteed: after a round-trip to the database, the keys will be
wrongly sorted.
Instead we want to save raw Date objects as keys, which preserves the
ordering. The date formatting can then be applied at display time by
the controller.
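A minimal sketch of the new shape, reusing the illustrative values
above; the localization call is only an assumption about how the
display-time formatting could look:
```ruby
s.dossiers_in_the_last_4_months = {
  Date.new(2021, 10, 1) => 409592,
  Date.new(2021, 11, 1) => 497823,
  Date.new(2021, 12, 1) => 38170,
  Date.new(2022, 1, 1)  => 0
}
# Display-time formatting, e.g. in the controller or view:
# I18n.l(Date.new(2021, 10, 1), format: "%B %Y") # => "octobre 2021" (with :fr locale)
```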
Fix #6848
Context: we want to validate public and private types_de_champ
separately.
Before, we validated the whole revision (and the validators themselves
then enumerated all champs, public and private).
Now we validate the public types_de_champ directly, which will let us
validate the private types_de_champ separately.
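A minimal sketch of the idea, with assumed model and method names
rather than the app's actual validators:
```ruby
class ProcedureRevision < ApplicationRecord
  # Validate only the public types_de_champ; the private ones can get
  # their own, separate validation later.
  validate :public_types_de_champ_are_valid

  private

  def public_types_de_champ_are_valid
    types_de_champ.reject(&:private?).each do |type_de_champ|
      next if type_de_champ.valid?
      errors.add(:types_de_champ, :invalid, message: type_de_champ.errors.full_messages.to_sentence)
    end
  end
end
```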
feat(expiration_banner): enhance wording of expiration
feat(dossiers/expiration_banner): enhance the expiration wording to include duree_conservation_dossiers_dans_ds + extension_conservation; also add a spec on the expiration_banner for instructeur
tech(question): discard_and_keep_track! ; are we really keeping track with default_scope { kept } ?
feat(stats): add DeletedDossier in Stat computations
Revert "tech(question): discard_and_keep_track! ; are we really keeping track with default_scope { kept } ?"
This reverts commit d1155b7eeaaf1a9f80189e59667e109541fcb089.
feat(stats): support deleted_dossiers for last_four_months_hash and cumulative_hash; extract the sanitized query & hash merging into methods
clean(rubocop): lint with rubocop
Update db/migrate/20211126080118_add_index_to_deleted_at_to_deleted_dossiers.rb
Co-authored-by: LeSim <mail@simon.lehericey.net>
fix(rubocop): avoid unneeded allocation
fix(migration): add concurrent index with expected syntax
fix(brakeman): add ignore message since the grouped date_trunc evaluation is only used by ourselves
Deep-cloned objects have all their associations stale. Thus, for a
newly deep-cloned revision, `revision.types_de_champs` returns `[]`,
even when it actually has associated types de champ.
This causes consecutive champ creations and re-orderings to fail in
subtle ways, like:
```ruby
procedure.draft_revision.add_type_de_champ(…)
procedure.publish_revision!
procedure.draft_revision.add_type_de_champ(…)
procedure.draft_revision.move_type_de_champ(…) # this will fail
```
As `publish_revision!` creates a new, stale revision, moving the type
de champ fails because not all existing champs are found until the
object is refreshed.
We don't hit this path in production, because usually only a single
operation is made in a request.
To fix this, save the new revision before associating it as the
procedure's draft revision.
(Another option would be to `reload` the revision after creation, but
this seems better contained and matches the name of the method.)
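A minimal sketch of the fix, assuming a deep_cloneable-style
`deep_clone` API and simplified association names:
```ruby
def publish_revision!
  # Build the next draft as a deep clone of the current one…
  new_draft = draft_revision.deep_clone(include: :revision_types_de_champ)
  # …and save it *before* it becomes the draft, so its associations are
  # fresh instead of stale copies left over from the clone.
  new_draft.save!
  update!(published_revision: draft_revision, draft_revision: new_draft)
end
```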
Calling business logic in a factory is a code smell: it usually
requires the object to be saved to the database, and may have
unintended consequences when the business logic changes.
Removing it also makes it possible to simply build a published
procedure without saving it to the database.
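A minimal sketch of the resulting factory, with hypothetical attribute
names for the published state:
```ruby
FactoryBot.define do
  factory :procedure do
    trait :published do
      # Set the state through plain attributes instead of calling the
      # publish! business logic, so `build` works without hitting the DB.
      published_at { Time.zone.now }
    end
  end
end
# Usage: no database write required.
procedure = FactoryBot.build(:procedure, :published)
```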
Instead of looking up the linked user by email, because:
- it follows the FC recommendation to fetch the DS account by its OpenID
- the email is not a valid key, as many users can share the same FCI email.
The following scenario now works:
User A (email: 1@mail.com) uses FC to connect to DS
=> they are connected as 1@mail.com
Another user B (email: generic@mail.com) uses FC to connect
=> they are connected as generic@mail.com
User A changes their FC email to generic@mail.com and connects to DS
=> they are still connected as 1@mail.com
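A minimal sketch of the lookup, with assumed model and attribute names:
```ruby
def find_or_create_fci(user_info)
  # Match on the stable OpenID identifier returned by FranceConnect,
  # never on the (non-unique) FranceConnect email.
  fci = FranceConnectInformation.find_or_initialize_by(
    france_connect_particulier_id: user_info[:france_connect_particulier_id]
  )
  fci.update!(email_france_connect: user_info[:email])
  fci
end
```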