# frozen_string_literal: true
class ZxcvbnService
  # Cache the zxcvbn tester between calls, class instances and threads.
  #
  # Before, every time a password was tested, the dictionaries were parsed
  # again by zxcvbn. Parsing dictionaries is slow: it may take up to ~1s.
  # This doesn't matter that much in production, but it makes tests very slow
  # (because we tend to create a lot of User records).
  #
  # The tester is lazily loaded on first use, in order not to slow down the
  # application boot sequence. It uses ~20 MB of memory (only once for all
  # threads), but makes tests more than twice as fast: for instance, model
  # tests go from 8m 21s to 3m 26s.
  #
  # NB: an additional optimization could be to preload the tester on boot,
  # before workers are forked, to take advantage of Puma's copy-on-write
  # mechanism. That way all forked workers would use the same cached instance.
  # But:
  # - We're not actually sure this would work properly. What if Ruby updates
  #   an internal ivar on the class, and this forces the OS to copy the whole
  #   data structure in each fork?
  # - Puma phased restarts are not compatible with copy-on-write anyway.
  # So we're avoiding this optimization for now, and take the extra 20 MB
  # per worker.
@tester_mutex = Mutex.new
  # Returns a Zxcvbn::Tester instance cached between class instances and
  # between threads.
  #
  # The tester weighs ~20 MB, and we'd like to save some memory – so rather
  # than storing it in a per-thread accessor, we prefer to use a mutex
  # to cache it between threads.
def self.tester
@tester_mutex.synchronize do
@tester ||= Zxcvbn::Tester.new
end
end
  def self.complexity(password) = tester.test(password.to_s).score
end