demarches-normaliennes/spec/lib/utils/retryable_spec.rb
simon lehericey f0b0e7fd9a Switch to using the zip unix binary to create archives. Also use a dedicated queue for DelayedJob
use dedicated archives queue

As the disk space used will increase, we want fine-grained control

move zip logic into a dedicated method

zip

wip

wip

fix(spec): pass spec in green

tech(improvements): avoid File.delete(folder); favor FileUtils.remove_entry_secure, which is safer. Also wrap most code that opens files in blocks, so files are cleaned up when the block ends. Lastly, use attachment.download to avoid high memory pressure [download in chunks, write in chunks]; otherwise big files [124>1GO] are loaded into memory. What if we run multiple jobs/downloads in parallel?

fix(spec): try to retry with grace

clean(procedure_archive_service_spec.rb): better retry [avoid rewriting an open file]

lint(things): everything
2021-12-13 16:37:04 +01:00
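The chunked-download and safe-cleanup ideas from the commit message above can be sketched as follows. This is a hypothetical illustration, not the repository's actual service code: `attachment` is assumed to respond to `#filename` and to `#download` with a block that yields chunks (as ActiveStorage blobs do), and `write_attachment_to` / `with_tmp_archive_dir` are names invented here.

```ruby
require 'fileutils'
require 'tmpdir'

# Stream an attachment to disk chunk by chunk, so a very large file
# is never held in memory all at once.
def write_attachment_to(attachment, dir)
  path = File.join(dir, attachment.filename.to_s)
  File.open(path, 'wb') do |file|
    # download in chunks, write in chunks: low, roughly constant memory usage
    attachment.download { |chunk| file.write(chunk) }
  end
  path
end

# Yield a temporary working directory, and clean it up when the block
# ends: FileUtils.remove_entry_secure rather than File.delete, as the
# commit message recommends for directories.
def with_tmp_archive_dir
  dir = Dir.mktmpdir
  yield dir
ensure
  FileUtils.remove_entry_secure(dir)
end
```

Because the directory is removed in an `ensure`, cleanup happens even when zipping raises mid-way.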


describe Utils::Retryable do
  Includer = Struct.new(:something) do
    include Utils::Retryable

    def caller(max_attempt:, errors:)
      with_retry(max_attempt: max_attempt, errors: errors) do
        yield
      end
    end
  end

  subject { Includer.new("test") }

  let(:spy) { double }

  describe '#with_retry' do
    it 'works while the retry count is less than max_attempt' do
      divider_that_raises_error = 0
      divider_that_works = 1
      expect(spy).to receive(:divider).and_return(divider_that_raises_error, divider_that_works)
      result = subject.caller(max_attempt: 2, errors: [ZeroDivisionError]) { 10 / spy.divider }
      expect(result).to eq(10 / divider_that_works)
    end

    it 're-raises the error if it occurs more than max_attempt times' do
      expect(spy).to receive(:divider).and_return(0, 0)
      expect { subject.caller(max_attempt: 1, errors: [ZeroDivisionError]) { 0 / spy.divider } }
        .to raise_error(ZeroDivisionError)
    end

    it 'does not retry other errors' do
      expect(spy).to receive(:divider).and_raise(StandardError).once
      expect { subject.caller(max_attempt: 2, errors: [ZeroDivisionError]) { 0 / spy.divider } }
        .to raise_error(StandardError)
    end
  end
end
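The `Utils::Retryable` implementation under test is not shown on this page. A minimal sketch that would satisfy these specs might look like the following, assuming `max_attempt` counts total attempts (not extra retries) and that only the listed error classes trigger a retry:

```ruby
module Utils
  module Retryable
    # Run the block, retrying when one of the given error classes is
    # raised, up to max_attempt attempts in total; once attempts are
    # exhausted, re-raise the last error. Other errors propagate
    # immediately without any retry.
    def with_retry(max_attempt:, errors:)
      attempt = 0
      begin
        yield
      rescue *errors
        attempt += 1
        retry if attempt < max_attempt
        raise
      end
    end
  end
end
```

With `max_attempt: 2`, a first `ZeroDivisionError` triggers one retry and a successful second call returns normally; with `max_attempt: 1`, the first failure is re-raised at once, matching the second example.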