path: root/weed/replication/sink
Age | Commit message | Author | Files | Lines
2025-07-16 | convert error formatting to %w everywhere (#6995) | Chris Lu | 2 | -3/+3
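For context: Go's %w verb, unlike %v, wraps the underlying error so callers can still match it with errors.Is / errors.As. A minimal standalone illustration (not SeaweedFS code):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func readConfig(path string) error {
        _, err := os.ReadFile(path)
        if err != nil {
            // %w preserves the underlying error for errors.Is / errors.As;
            // %v would flatten it to a plain string.
            return fmt.Errorf("read config %s: %w", path, err)
        }
        return nil
    }

    func main() {
        err := readConfig("/no/such/file")
        fmt.Println(errors.Is(err, fs.ErrNotExist)) // true, thanks to %w
    }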
2025-05-28 | Add context with request (#6824) | Aleksey Kosov | 5 | -8/+9
2025-05-22 | added context to filer_client method calls (#6808) | Aleksey Kosov | 1 | -4/+4
Co-authored-by: akosov <a.kosov@kryptonite.ru>
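The pattern in these two changes, roughly: a ctx carried from the incoming request is threaded into each filer client call so cancellation and deadlines propagate. A hedged sketch with hypothetical names (the real filer_client methods differ):

    package main

    import (
        "context"
        "time"
    )

    // lookupEntry is a hypothetical stand-in for a filer client method;
    // the point is that it takes the request's ctx instead of creating
    // context.Background() internally.
    func lookupEntry(ctx context.Context, path string) error {
        select {
        case <-ctx.Done(): // caller cancelled or timed out
            return ctx.Err()
        case <-time.After(10 * time.Millisecond): // stands in for the RPC
            return nil
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        _ = lookupEntry(ctx, "/buckets/demo/file.txt")
    }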
2025-01-28 | fix: ensure() before actual deletion, within the b2 client library | chrislu | 1 | -1/+8
fix https://github.com/seaweedfs/seaweedfs/issues/6483
2024-12-19"golang.org/x/exp/slices" => "slices" and go fmtchrislu1-1/+1
2024-07-17Azure sink: avoid overwriting existing fileschrislu1-1/+12
2024-07-16 | Added TLS for HTTP clients (#5766) | vadimartynov | 1 | -2/+9
* Added global http client
* Added Do func for global http client
* Changed the code to use the global http client
* Fix http client in volume uploader
* Fixed pkg name
* Fixed http util funcs
* Fixed http client for bench_filer_upload
* Fixed http client for stress_filer_upload
* Fixed http client for filer_server_handlers_proxy
* Fixed http client for command_fs_merge_volumes
* Fixed http client for command_fs_merge_volumes and command_volume_fsck
* Fixed http client for s3api_server
* Added init global client for main funcs
* Rename global_client to client
* Changed:
  - fixed NewHttpClient
  - added CheckIsHttpsClientEnabled func
  - updated security.toml in scaffold
* Reduce the visibility of some functions in the util/http/client pkg
* Added the loadSecurityConfig function
* Use util.LoadSecurityConfiguration() in NewHttpClient func
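A rough sketch of what a shared HTTP client with optional TLS can look like; the function and parameter names are illustrative, not the actual util/http/client API:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
        "net/http"
        "os"
        "time"
    )

    // newHTTPClient sketches a process-wide client with optional TLS;
    // the caFile parameter is illustrative configuration, not the real
    // security.toml wiring.
    func newHTTPClient(caFile string) (*http.Client, error) {
        tlsConfig := &tls.Config{}
        if caFile != "" {
            pem, err := os.ReadFile(caFile)
            if err != nil {
                return nil, err
            }
            pool := x509.NewCertPool()
            if !pool.AppendCertsFromPEM(pem) {
                return nil, fmt.Errorf("no certificates parsed from %s", caFile)
            }
            tlsConfig.RootCAs = pool
        }
        return &http.Client{
            Timeout:   30 * time.Second,
            Transport: &http.Transport{TLSClientConfig: tlsConfig},
        }, nil
    }

    func main() {
        client, err := newHTTPClient("") // empty: fall back to system roots
        if err != nil {
            log.Fatal(err)
        }
        _ = client // share this one client instead of constructing per call
    }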
2024-07-16 | [filer.sync] skip overwriting existing fresh entry | chrislu | 1 | -0/+4
2024-05-23 | go fmt | chrislu | 1 | -1/+1
2024-05-14 | [s3] Fixed s3 replication by sending content-md5 as base64 (#5596) | Martin Stiborský | 1 | -1/+2
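The S3 Content-MD5 header must carry the base64 encoding of the raw 16-byte MD5 digest (per RFC 1864), not the hex string. A small standalone example:

    package main

    import (
        "crypto/md5"
        "encoding/base64"
        "fmt"
    )

    func main() {
        data := []byte("hello world")
        sum := md5.Sum(data)
        // S3 expects Content-MD5 as base64 of the raw 16-byte digest,
        // not the hex representation.
        fmt.Println(base64.StdEncoding.EncodeToString(sum[:]))
    }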
2024-03-21 | fix: panic: assignment to entry in nil map on S3Sink.CreateEntry (#5406) | Konstantin Lebedev | 1 | -7/+12
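The panic class being fixed: writing to a Go map that was never allocated. A minimal reproduction and the usual fix:

    package main

    import "fmt"

    func main() {
        var meta map[string]string // nil map: reads are fine, writes panic
        // meta["k"] = "v"         // would panic: assignment to entry in nil map
        if meta == nil {
            meta = make(map[string]string) // allocate before writing
        }
        meta["k"] = "v"
        fmt.Println(meta)
    }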
2024-03-07 | [filer.backup] add param uploader_part_size for S3sink (#5352) | Konstantin Lebedev | 1 | -40/+81
* fix: install cronie
* chore: refactor configure S3Sink
* chore: refactor config
* add filer-backup compose file
* fix: X-Amz-Meta-Mtime and resolve with comments
* fix: attr mtime
* fix: MaxUploadParts is reduced to the maximum allowable
* fix: env and force set max MaxUploadParts
* fix: env WEED_SINK_S3_UPLOADER_PART_SIZE_MB
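Why MaxUploadParts matters: S3 multipart uploads allow at most 10,000 parts, so a configured part size may have to grow for large objects. A sketch of that clamping under those assumptions (illustrative, not the actual S3Sink code):

    package main

    import "fmt"

    const (
        maxUploadParts    = 10000           // S3 multipart hard limit
        minUploadPartSize = 5 * 1024 * 1024 // S3 minimum part size (5 MiB)
    )

    // effectivePartSize honors the configured part size, but grows it so
    // the object still fits within 10,000 parts.
    func effectivePartSize(configured, objectSize int64) int64 {
        partSize := configured
        if partSize < minUploadPartSize {
            partSize = minUploadPartSize
        }
        if need := (objectSize + maxUploadParts - 1) / maxUploadParts; partSize < need {
            partSize = need
        }
        return partSize
    }

    func main() {
        // a 100 GiB object at 8 MiB parts would need 12,800 parts,
        // so the part size is raised to about 10.25 MiB
        fmt.Println(effectivePartSize(8<<20, 100<<30))
    }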
2023-10-06 | Fix filer.backup local sink to propagate file mode changes (#4896) | Andrew Garrett | 1 | -1/+13
2023-09-27 | fix: avoid "file name too long" error when writing a file (#4876) | Konstantin Lebedev | 1 | -1/+1
2023-01-20 | grpc connection to filer add sw-client-id header | chrislu | 2 | -1/+3
2022-12-20 | add a simple file replication progress bar | chrislu | 1 | -2/+21
2022-12-19 | filer sink retries reading file chunks, skipping missing chunks | chrislu | 2 | -9/+15
if a file chunk is not available at replication time, the file is skipped
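A hedged sketch of the retry-then-skip idea described above (placeholder function names, not the actual sink code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errChunkMissing = errors.New("chunk not found")

    // fetchChunk is a placeholder for reading one chunk from a volume server.
    func fetchChunk(fileID string) ([]byte, error) { return nil, errChunkMissing }

    // readChunkWithRetry retries transient failures a few times, then
    // reports the chunk as skippable rather than failing replication outright.
    func readChunkWithRetry(fileID string) ([]byte, bool) {
        for attempt := 0; attempt < 3; attempt++ {
            data, err := fetchChunk(fileID)
            if err == nil {
                return data, true
            }
            time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond)
        }
        return nil, false // still missing: skip instead of aborting
    }

    func main() {
        if _, ok := readChunkWithRetry("3,01637037d6"); !ok {
            fmt.Println("chunk unavailable, skipping")
        }
    }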
2022-11-15 | refactor filer_pb.Entry and filer.Entry to use GetChunks() | chrislu | 5 | -10/+10
for later locking on reading chunks
2022-10-28 | refactor filer proto chunk variable from mtime to modified_ts_ns | chrislu | 1 | -1/+1
2022-10-11 | fix invalid memory address or nil pointer dereference on filer.sync | chrislu | 1 | -1/+1
fix https://github.com/seaweedfs/seaweedfs/issues/3826
2022-10-04 | fix parameters | chrislu | 1 | -9/+8
2022-10-04 | filer.sync: limit concurrency when fetching file chunks | chrislu | 2 | -9/+13
fix https://github.com/seaweedfs/seaweedfs/issues/3787
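A common way to bound concurrency in Go is a buffered-channel semaphore; a self-contained sketch of the pattern (not the actual filer.sync code):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        chunks := []string{"a", "b", "c", "d", "e", "f"}
        sem := make(chan struct{}, 3) // at most 3 concurrent fetches
        var wg sync.WaitGroup
        for _, c := range chunks {
            wg.Add(1)
            go func(chunk string) {
                defer wg.Done()
                sem <- struct{}{}        // acquire a slot
                defer func() { <-sem }() // release it
                fmt.Println("fetching chunk", chunk)
            }(c)
        }
        wg.Wait()
    }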
2022-09-20 | filer replication: compare content changes directly | chrislu | 1 | -4/+0
Fix https://github.com/seaweedfs/seaweedfs/issues/3714
The destination chunks may be empty. For example, the file is updated and the volume is vacuumed; in this case the sync misses the old chunks. That is fine, but the entry then has correct metadata and missing chunks. For this case a simple metadata comparison would wrongly skip the data change, and the file would stay empty unless the file's content md5 changed.
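A rough illustration of why comparing chunk lists directly is safer than a metadata-only check here (hypothetical types, not the actual entry structs):

    package main

    import "fmt"

    type chunk struct {
        FileID string
        Size   int64
    }

    // contentDiffers compares the chunk lists directly, so an entry whose
    // metadata still matches but whose chunks were lost (e.g. after a
    // volume vacuum) is detected as changed and re-replicated.
    func contentDiffers(src, dst []chunk) bool {
        if len(src) != len(dst) {
            return true
        }
        for i := range src {
            if src[i] != dst[i] {
                return true
            }
        }
        return false
    }

    func main() {
        src := []chunk{{FileID: "3,01637037d6", Size: 1024}}
        var dst []chunk // destination lost its chunks
        fmt.Println(contentDiffers(src, dst)) // true: metadata alone would miss this
    }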
2022-09-14 | refactor: `Directory` readability (#3665) | Ryan Russell | 1 | -1/+1
2022-09-14 | docs: `replicte` -> `replicate` (#3664) | Ryan Russell | 1 | -2/+2
2022-09-04 | filer.backup and filer.sync: include headers during backup and sync | chrislu | 1 | -0/+1
fix https://github.com/seaweedfs/seaweedfs/issues/3532
2022-08-27 | simplify | chrislu | 2 | -26/+14
2022-08-27 | clean up | chrislu | 2 | -159/+1
2022-08-26 | s3 sink use s3 upload manager | chrislu | 1 | -44/+32
fix https://github.com/seaweedfs/seaweedfs/issues/3531
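For reference, the AWS SDK for Go v1 upload manager splits large bodies into parts and retries failed parts, replacing hand-rolled multipart code. A minimal usage sketch (bucket and key names are illustrative):

    package main

    import (
        "log"
        "strings"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
        // the manager handles splitting, parallelism, and per-part retries
        uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
            u.PartSize = 8 << 20 // 8 MiB parts
        })
        _, err := uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String("my-bucket"),
            Key:    aws.String("path/to/object"),
            Body:   strings.NewReader("hello"),
        })
        if err != nil {
            log.Fatal(err)
        }
    }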
2022-08-23 | remove old raft servers if they don't answer to pings for too long (#3398) | askeipx | 1 | -2/+3
* remove old raft servers if they don't answer to pings for too long
  - add ping durations as options
  - rename ping fields
  - fix some todos
  - get masters through masterclient
  - raft remove server from leader
  - use raft servers to ping them
  - CheckMastersAlive for hashicorp raft only
* prepare blocking ping
* pass waitForReady as param
* pass waitForReady through all functions
* waitForReady works
* refactor
* remove unneeded params
* rollback unneeded changes
* fix
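The gist of the change, as a generic sketch (not the actual hashicorp/raft integration): track the last successful ping per server and remove servers that stay silent past a threshold:

    package main

    import (
        "fmt"
        "time"
    )

    // lastSeen tracks the last successful ping per raft server.
    var lastSeen = map[string]time.Time{
        "master1:9333": time.Now(),
        "master2:9333": time.Now().Add(-5 * time.Minute),
    }

    func pruneDeadServers(timeout time.Duration) {
        for addr, seen := range lastSeen {
            if time.Since(seen) > timeout {
                fmt.Println("removing unresponsive raft server:", addr)
                delete(lastSeen, addr) // the real code removes it via the raft leader
            }
        }
    }

    func main() { pruneDeadServers(time.Minute) }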
2022-08-20 | filer sink: retryable data chunk uploading | chrislu | 1 | -52/+29
2022-08-20 | cleaner code | chrislu | 1 | -19/+19
2022-08-19 | filer.backup: backup small files if the file is saved in filer (saveToFilerLimit > 0) | chrislu | 6 | -1/+34
fix https://github.com/seaweedfs/seaweedfs/issues/3468
2022-08-04 | filer prefer volume server in same data center (#3405) | Konstantin Lebedev | 2 | -0/+6
* initial prefer same data center https://github.com/seaweedfs/seaweedfs/issues/3404
* GetDataCenter
* prefer same data center for ReplicationSource
* GetDataCenterId
* remove glog
2022-07-29 | move to https://github.com/seaweedfs/seaweedfs | chrislu | 10 | -57/+57
2022-06-29 | use const multipart uploads folder | Konstantin Lebedev | 1 | -1/+2
avoid a bucket NotEmpty error if the multipart uploads folder exists
2022-05-11 | fix compilation | chrislu | 1 | -1/+1
2022-05-06 | filer.sync: pass attributes for mount | chrislu | 1 | -0/+6
fix https://github.com/chrislusf/seaweedfs/issues/3012
2022-02-27 | ensure compatibility | elee | 1 | -1/+3
2022-02-27 | set canned acl on replication create | elee | 2 | -1/+6
2022-02-07 | filer.sync: fix replicating partially updated file | chrislu | 1 | -1/+1
Run two servers with volumes and filers:

    server -dir=Server1alpha -master.port=11000 -filer -filer.port=11001 -volume.port=11002
    server -dir=Server1sigma -master.port=11006 -filer -filer.port=11007 -volume.port=11008

Run an active-passive filer.sync:

    filer.sync -a localhost:11007 -b localhost:11001 -isActivePassive

Upload a file to port 11007:

    curl -F file=@/Desktop/9.xml "http://localhost:11007/testFacebook/"

If we request the file from both servers now, everything is correct, even if we append data to the file and upload it again:

    curl "http://localhost:11007/testFacebook/9.xml" EQUALS curl "http://localhost:11001/testFacebook/9.xml"

However, if we change already existing data in the file (for example, shortening the first line), the file on the second server becomes invalid and no longer matches the first file. (Screenshot 2022-02-07 at 14:21:11.)

The problem occurs at line 202 of filer_sink.go, due to incorrect mapping of chunk names in the DoMinusChunks function. The names in deletedChunks do not match the chunks in existingEntry.Chunks, because the former come from the other server and use different addressing (names) than the server where the file is being overwritten. As a result, the deleted chunks are never actually deleted on the server the file is replicated to.
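A simplified sketch of the subtraction at issue: if chunks are keyed by their file ids, ids minted on the source cluster never match the destination's re-addressed chunks, so nothing is subtracted (illustrative types only, not the real DoMinusChunks):

    package main

    import "fmt"

    type chunk struct {
        FileID string
    }

    // minusChunks returns the chunks of a with no counterpart in b, keyed
    // by FileID. If b holds source-cluster ids and a holds destination ids,
    // nothing ever matches and no chunk is removed.
    func minusChunks(a, b []chunk) []chunk {
        seen := make(map[string]bool, len(b))
        for _, c := range b {
            seen[c.FileID] = true
        }
        var out []chunk
        for _, c := range a {
            if !seen[c.FileID] {
                out = append(out, c)
            }
        }
        return out
    }

    func main() {
        existing := []chunk{{FileID: "7,11223344"}}  // destination addressing
        deleted := []chunk{{FileID: "3,01637037d6"}} // same data, source addressing
        // the stale destination chunk survives the subtraction:
        fmt.Println(minusChunks(existing, deleted)) // [{7,11223344}]
    }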
2021-12-26 | use streaming mode for long poll grpc calls | chrislu | 2 | -6/+6
Streaming mode creates a separate grpc connection for each call; this is to ensure the long poll connections are properly closed.
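A hedged sketch of consuming a server-side gRPC stream as a long poll; subscribeClient stands in for a protoc-generated streaming client and is not the real SeaweedFS API:

    package main

    import (
        "context"
        "io"
        "log"
    )

    // subscribeClient mimics the Recv side of a generated streaming client.
    type subscribeClient interface {
        Recv() (string, error)
    }

    // longPoll drains one stream; because each long poll owns its own
    // stream (and connection), cancelling or closing it ends the poll cleanly.
    func longPoll(ctx context.Context, stream subscribeClient) {
        for {
            if ctx.Err() != nil {
                return
            }
            msg, err := stream.Recv()
            if err == io.EOF {
                return // server closed the stream
            }
            if err != nil {
                log.Println("stream error:", err)
                return
            }
            log.Println("event:", msg)
        }
    }

    // fakeStream lets the sketch run without a real gRPC server.
    type fakeStream struct{ n int }

    func (f *fakeStream) Recv() (string, error) {
        if f.n == 0 {
            return "", io.EOF
        }
        f.n--
        return "metadata update", nil
    }

    func main() {
        longPoll(context.Background(), &fakeStream{n: 2})
    }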
2021-09-12 | change server address from string to a type | Chris Lu | 1 | -1/+1
2021-09-06 | refactoring | Chris Lu | 1 | -1/+10
2021-09-01 | go fmt | Chris Lu | 1 | -3/+3
2021-08-25 | cloud drive: add support for Wasabi | Chris Lu | 1 | -0/+1
* disable md5, sha256 checking to avoid reading one chunk twice
* single threaded upload to avoid chunk swapping (to be enhanced later)
2021-08-24 | update azure library | Chris Lu | 1 | -3/+3
2021-08-23 | cloud drive: s3 configurable force path style | Chris Lu | 1 | -0/+1
2021-08-23 | do not force path style for better compatibility | Chris Lu | 1 | -1/+0
2021-07-26 | remote.mount | Chris Lu | 1 | -1/+1