
s3

Here are 3,785 public repositories matching this topic...

rclone
sevospl commented Nov 6, 2021

Hi. First of all, kudos to you for the VFS cache - it's really brilliant. I have one request though.

For a little background, as of now I'm using the following settings:

--vfs-cache-mode full \
  --vfs-cache-max-size 500G \
  --vfs-cache-max-age 5000h \
  --vfs-cache-poll-interval 5m \
  --vfs-read-ahead 2G \
  --bwlimit-file 32M \
  --vfs-read-chunk-size=64M \
  --vfs-read-chunk-s
seaweedfs

SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.

  • Updated Jul 25, 2022
  • Go
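Since the description calls out the S3 API and S3 Gateway, here is a minimal sketch of exercising a SeaweedFS S3 endpoint with boto3. The endpoint address (localhost:8333, the gateway's usual default) and the placeholder credentials are assumptions for illustration only.

# Minimal sketch: generic S3 calls against an assumed SeaweedFS S3 gateway.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8333",  # assumed gateway address
    aws_access_key_id="any",               # placeholder credentials
    aws_secret_access_key="any",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from boto3")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
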
thanos
jspeedz commented Sep 5, 2019

What went wrong?

I'm getting deprecation warnings with OpenSSL encryption.

[2019/09/05 08:38:52][info] Using Encryptor::OpenSSL to encrypt the archive.
[2019/09/05 08:40:22][warn] Pipeline STDERR Messages:
[2019/09/05 08:40:22][warn] (Note: may be interleaved if multiple commands returned error messages)
[2019/09/05 08:40:22][warn]
[2019/09/05 08:40:22][warn] *** WARNING : depre

Someone take a look at this! good first issue
Replibyte
moisesrodriguez commented May 17, 2022

Looking at how mysqldump is used, I noticed the flag --skip-extended-insert. Why write INSERT statements in one-row syntax, with only one record per statement, when that is much slower than putting multiple records in a single INSERT, especially when importing a lot of data?

enhancement good first issue help wanted
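The comment above rests on the difference between one-row INSERT statements (what --skip-extended-insert produces) and mysqldump's default multi-row "extended" INSERT. A rough sketch of the two statement shapes, with a made-up table and rows purely for illustration:

# Illustration only: table name and rows are invented.
rows = [(1, "alice"), (2, "bob"), (3, "carol")]

# --skip-extended-insert: one INSERT per row, so the server parses and
# executes one statement per record when the dump is replayed.
single_row = [f"INSERT INTO users VALUES ({i}, '{name}');" for i, name in rows]

# Default behaviour: one statement carrying many rows, which usually imports
# much faster because there is far less per-statement overhead.
extended = (
    "INSERT INTO users VALUES "
    + ", ".join(f"({i}, '{name}')" for i, name in rows)
    + ";"
)

print("\n".join(single_row))
print(extended)
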
zamazan4ik commented Jun 19, 2022

Hi!

Could you please add information about supported architectures to the documentation? E.g., which architectures are supported on each operating system, and whether there are any specific instruction-set requirements (maybe AVX is required; I do not know).

This kind of information is important for end users.

Thanks in advance!

good first issue
demitri commented Aug 19, 2021

Problem description

I am getting the following error when reading a file from an S3 bucket:

Invalid bucket name "xxxx:yyyy@bucket": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]acce
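The message looks like botocore's bucket-name validation rejecting a URI with embedded credentials: the whole "xxxx:yyyy@bucket" string is being treated as the bucket name. A minimal sketch of passing credentials through a boto3 client instead; the bucket, key, and credential values are placeholders, and the smart_open lines are an assumption since the excerpt does not name the library being used.

# Minimal sketch: supply credentials to boto3 explicitly rather than
# embedding them in the object URI. All values below are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",      # placeholder
    aws_secret_access_key="...",      # placeholder
)
s3 = session.client("s3")

# Only the bare bucket name must match the naming rules in the error message.
body = s3.get_object(Bucket="bucket", Key="path/to/file")["Body"].read()

# If the file is read via smart_open (an assumption), recent releases accept
# the client through transport_params instead of credentials in the URI:
#   from smart_open import open as s3_open
#   with s3_open("s3://bucket/path/to/file", "rb",
#                transport_params={"client": s3}) as fh:
#       data = fh.read()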
