# frozen_string_literal: true

# services: cache zxcvbn dictionaries per-thread
#
# Before, every time a password was tested, the dictionaries were parsed
# again by zxcvbn. Parsing dictionaries is slow: it may take up to ~1s.
# This doesn't matter that much in production, but it makes tests very
# slow (because we tend to create a lot of User records).
#
# With this change, the initialized tester is shared between calls, class
# instances and threads. It is lazily loaded on first use, in order not to
# slow down the application boot sequence.
#
# This uses ~20 MB of memory (only once for all threads), but makes tests
# more than twice as fast. For instance, model tests go from **8m 21s**
# to **3m 26s**.
#
# NB: an additional optimization could be to preload the tester on boot,
# before workers are forked, to take advantage of Puma's copy-on-write
# mechanism. That way all forked workers would use the same cached
# instance. But:
# - We're not actually sure this would work properly. What if Ruby updates
#   an internal ivar on the class, forcing the OS to copy the whole data
#   structure in each fork?
# - Puma phased restarts are not compatible with copy-on-write anyway.
# So we're avoiding this optimization for now, and accept the extra 20 MB
# per worker.
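# The lazy, thread-shared cache described above can be sketched as follows.
# This is a hypothetical, self-contained illustration: `ExpensiveTester`
# stands in for `Zxcvbn::Tester` (whose dictionary parsing is the slow part)
# and `PasswordService` stands in for `ZxcvbnService`; the real
# implementation may differ.

```ruby
# Stand-in for Zxcvbn::Tester: the constructor is the expensive part
# (parsing dictionaries), so we count how often it actually runs.
class ExpensiveTester
  @count = 0

  class << self
    attr_reader :count

    def built!
      @count += 1
    end
  end

  def initialize
    self.class.built!
    # ... imagine ~1s of dictionary parsing here ...
  end

  def test(password)
    password.length >= 12 ? 4 : 1 # toy scoring
  end
end

class PasswordService
  TESTER_MUTEX = Mutex.new

  # Lazily build the tester on first use; the mutex ensures concurrent
  # threads don't each build their own copy.
  def self.tester
    TESTER_MUTEX.synchronize { @tester ||= ExpensiveTester.new }
  end

  def initialize(password)
    @password = password
  end

  def score
    self.class.tester.test(@password)
  end
end

# Four threads, four service instances, but the tester is built only once.
scores = 1.upto(4).map do
  Thread.new { PasswordService.new('s3cure-long-pass!').score }
end.map(&:value)
```

# Note the cache lives on the class, not the instance, so every password
# tested in the process benefits; the mutex only matters around first use.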
describe ZxcvbnService do
  let(:password) { SECURE_PASSWORD }

  subject(:service) { ZxcvbnService.new(password) }

  describe '#score' do
    it 'returns the password complexity score' do
      expect(service.score).to eq 4
    end
  end

  describe '#complexity for a strong password' do
    it 'returns the password score and length' do
      expect(service.complexity).to eq [4, 20]
    end
  end

  describe '#complexity for a weak password' do
    let(:password) { 'motdepassefrançais' }

    it 'returns the password score and length' do
      expect(service.complexity).to eq [1, 18]
    end
  end

  describe 'caching' do
    it 'lazily caches the tester between calls and instances' do
      allow(Zxcvbn::Tester).to receive(:new).and_call_original
      allow(YAML).to receive(:safe_load).and_call_original

      first_service = ZxcvbnService.new('some-password')
      first_service.score
      first_service.complexity
      other_service = ZxcvbnService.new('other-password')
      other_service.score
      other_service.complexity

      expect(Zxcvbn::Tester).to have_received(:new).at_most(:once)
      expect(YAML).to have_received(:safe_load).at_most(:once)
    end

    it 'lazily caches the tester between threads' do
      allow(Zxcvbn::Tester).to receive(:new).and_call_original

      threads = 1.upto(4).map do
        Thread.new do
          ZxcvbnService.new(password).score
        end
      end.each(&:join)

      scores = threads.map(&:value)

      expect(scores).to eq([4, 4, 4, 4])
      expect(Zxcvbn::Tester).to have_received(:new).at_most(:once)
    end
  end
end