# frozen_string_literal: true
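
# Wraps the zxcvbn password-strength estimator behind a small service object.
#
# A usage sketch (illustrative only; the password and values are hypothetical):
#
#   service = ZxcvbnService.new("correct horse battery staple")
#   service.score                                   # => an integer between 0 and 4
#   score, vulnerabilities, length = service.complexity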
class ZxcvbnService
  @tester_mutex = Mutex.new

  class << self
    # Returns a Zxcvbn instance cached between class instances and between threads.
    #
    # The tester weighs ~20 MB, and we'd like to save some memory – so rather
    # than storing it in a per-thread accessor, we prefer to use a mutex
    # to cache it between threads.
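    #
    # A minimal sketch of the sharing behaviour (illustrative only, assuming a
    # threaded server such as Puma): concurrent callers get the very same cached
    # instance, so the dictionaries are parsed only once per process.
    #
    #   testers = 4.times.map { Thread.new { ZxcvbnService.tester } }.map(&:value)
    #   testers.map(&:object_id).uniq.size # => 1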
    def tester
      @tester_mutex.synchronize do
        @tester ||= build_tester
      end
    end

    private
    # Returns a fully initialized tester from the on-disk dictionary.
    #
    # This is slow: loading and parsing the dictionary may take around 1s.
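    #
    # One way to observe the cost and the caching (illustrative sketch, assuming
    # the Benchmark module from Ruby's stdlib is available):
    #
    #   Benchmark.realtime { ZxcvbnService.tester } # first call: ~1s, parses the dictionaries
    #   Benchmark.realtime { ZxcvbnService.tester } # later calls: nearly instant, cached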
    def build_tester
      dictionaries = YAML.safe_load(Rails.root.join("config", "initializers", "zxcvbn_dictionnaries.yaml").read)

      tester = Zxcvbn::Tester.new
      tester.add_word_lists(dictionaries)
      tester
    end
  end

  def initialize(password)
    @password = password
  end

  # Returns a triple: [zxcvbn score, comma-separated matched words or tokens
  # longer than two characters, password length].
  def complexity
    wxcvbn = compute_zxcvbn
    score = wxcvbn.score
    length = @password.blank? ? 0 : @password.length
    vulnerabilities = wxcvbn.match_sequence.map { |m| m.matched_word.nil? ? m.token : m.matched_word }.filter { |s| s.length > 2 }.join(', ')

    [score, vulnerabilities, length]
  end

  def score
    compute_zxcvbn.score
  end

  private
  def compute_zxcvbn
    self.class.tester.test(@password)
  end
end