
A bug in Ansible's s3_sync module: `file_change_strategy` not working, and a workaround

The actual workaround is described at the very bottom. If you're in a hurry, jump straight to: Workaround

As of 2019-03-22, when I'm writing this entry, this bug still exists. Although I've opened an issue and a pull request for it, the Ansible team hasn't responded yet, so it may not be fixed any time soon…

I've also prepared a reproducer, which you can check out here: https://github.com/issei-m/ansible-s3-sync-bug-reproducer

What Ansible is

I suspect readers of this entry already know what it is, but let me explain briefly: it's an infrastructure management tool that lets you define the desired server state (e.g. "the user foo, who can log in with private key A, should exist") in YAML files.
Honestly, I don't use it much these days because of more modern execution environments such as containers and serverless, but I still do occasionally.

Official Site

s3_sync module

As the name suggests, it's an Ansible module that uploads files to an S3 bucket.

Official Document

It can not only upload files but also check for differences between local and remote files, which is handy because it avoids pointlessly re-uploading unchanged files.

Here is an example definition:

tasks:
  - name: copy file to s3
    s3_sync:
      bucket: "{{ bucket_name }}"
      file_root: "./s3"
      key_prefix: "" # Default
      mode: push
      file_change_strategy: date_size # Default

With this definition, all local files in the ./s3 directory are synced to the specified S3 bucket.
And as mentioned above, file_change_strategy is an option that specifies the strategy (logic) used to determine whether a file has changed.

date_size is the default strategy. The documentation claims: "it will upload if file sizes don't match or if local file modified date is newer than s3's version". This is clearly an advantage from a performance perspective, since nothing is uploaded when there are no changes.

Unfortunately, however, as the title implies, the bug I want to share with you today lives in exactly this date_size strategy.

Bug details

Let's see what happens, following the reproducer mentioned above.

Clone the repository and execute the following command; the local file s3/target.txt will then be uploaded to the specified S3 bucket under the key /target.txt.

$ ansible-playbook playbook.yml --extra-vars 'bucket_name=your-bucket-name'

(!) Replace your-bucket-name with your own bucket name.

As the result shows, the file has been uploaded successfully.

TASK [copy file to s3] *********************************************************************************************************************************************************************************
changed: [localhost]

The content of the file was Test!!, so let's see what the S3 side looks like:

$ aws s3 cp s3://your-bucket-name/target.txt -                            
Test!!

Looks good.

Okay, let's see what happens if we execute the same command again after changing the content of the file to Yeah!!.

$ echo 'Yeah!!' > s3/target.txt
$ ansible-playbook playbook.yml --extra-vars 'bucket_name=your-bucket-name'

...

TASK [copy file to s3] *********************************************************************************************************************************************************************************
ok: [localhost]

...

Nothing is reported as changed, contrary to what we expected. Indeed, you can confirm that nothing has changed on the S3 side:

$ aws s3 cp s3://your-bucket-name/target.txt -
Test!!

However, it would actually be updated if you changed the content to Yeah!!!, which is 1 byte larger than the original. (I omit the execution result, though.)

Apparently, the file isn’t uploaded if the file size matches between local and remote.

Updating a file without changing its size happens quite often, particularly with config files (e.g. tweaking a number), so this is inconvenient. Moreover, the documentation claims the file is uploaded whenever the local file is newer than the remote one, regardless of file size. Something is clearly wrong here.

Looking at the implementation

As of now, the default branch devel, which contains the problem, looks like this:

https://github.com/ansible/ansible/blob/50d56ca89de26317c41c4fb1a777fd7ccdc025a2/lib/ansible/modules/cloud/amazon/s3_sync.py#L439-L440

if local_modified_epoch <= remote_modified_epoch or local_size == remote_size:
    entry['skip_flag'] = True

It says: "if the last-modified date on the S3 side is newer, or both files have the same size, then the upload is skipped."
That's definitely the bug.

Given that the default value of entry['skip_flag'] is False, this is what you'd write to implement what the documentation describes:

if local_modified_epoch <= remote_modified_epoch and local_size == remote_size:
    entry['skip_flag'] = True

Or

entry['skip_flag'] = not (local_size != remote_size or local_modified_epoch > remote_modified_epoch)

I guess this bug arose because the desired behavior is described in the documentation using or, and that or was carried over as-is into the skip condition, where and is actually correct: negating "upload if A or B" gives "skip if not A and not B" (De Morgan's law).
(And in my opinion, it's better to avoid negative names like skip_flag for boolean values.)
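To make the difference concrete, here is a minimal sketch (not the actual module code; the function names are illustrative) contrasting the buggy skip condition with the corrected one on the article's same-size-but-newer case:

```python
# Minimal sketch contrasting the buggy skip condition with the fixed one.

def buggy_skip(local_mtime, remote_mtime, local_size, remote_size):
    # Current devel branch: skips when EITHER condition holds,
    # so a matching size alone is enough to skip the upload.
    return local_mtime <= remote_mtime or local_size == remote_size

def fixed_skip(local_mtime, remote_mtime, local_size, remote_size):
    # Documented behavior negated via De Morgan: skip only when the
    # local file is neither newer nor a different size.
    return local_mtime <= remote_mtime and local_size == remote_size

# Case from the article: local file is newer but has the same size.
print(buggy_skip(200, 100, 6, 6))  # True  -> upload wrongly skipped
print(fixed_skip(200, 100, 6, 6))  # False -> upload happens

# Unchanged file (same size, not newer): both correctly skip.
print(buggy_skip(100, 200, 6, 6))  # True
print(fixed_skip(100, 200, 6, 6))  # True
```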

Workaround

To be honest, the best outcome would be for the above issue to be fixed, but for now you can use a workaround: specify file_change_strategy: checksum.
With this strategy, the module checks whether the local and remote files are identical by comparing the MD5 of their contents. Fortunately, as of now, there appear to be no bugs there, so it works as expected.

Worried about the performance side? It's not a big deal.
The logic uses the S3 object's ETag metadata, which holds the MD5 of the content, so the remote file never needs to be downloaded regardless of its size.
The local side is fine as well, since cases where a file is so huge that calculating its MD5 takes significant time are rare.

So you can use this strategy with no worry.
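The idea behind the checksum strategy can be sketched as follows. Note this is an illustration, not the module's code: `remote_etag` stands in for what an S3 HEAD request would return, and for non-multipart uploads the ETag is simply the hex MD5 of the content.

```python
# Sketch of the checksum idea: compare the local file's MD5 with the
# (assumed non-multipart) S3 ETag to catch same-size edits.
import hashlib

def local_md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# Same-size contents, as in the article's example:
original = b"Test!!\n"
edited = b"Yeah!!\n"

remote_etag = local_md5(original)          # pretend this came from S3
print(local_md5(original) == remote_etag)  # True  -> skip upload
print(local_md5(edited) == remote_etag)    # False -> upload despite same size
```

Unlike date_size, this comparison depends only on content, so a same-size edit is always detected.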

Conclusion

  • As of 2019-03-22, the file_change_strategy: date_size strategy is buggy: a file whose size matches the remote one is never updated
  • You can use the checksum strategy instead
  • You should avoid using a negative name for a boolean value (IMO)
