| author | Chris Lu <chrislusf@users.noreply.github.com> | 2025-12-05 15:39:26 -0800 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-12-05 15:39:26 -0800 |
| commit | f1384108e8559e08d4c8c9dc4d7d12b61a79e0b5 (patch) | |
| tree | f11ffce6641bd61f243369a60e70a96d9bcaea66 /weed | |
| parent | c0dad091f149d80c6737f006c7ab98f4cd69478b (diff) | |
| download | seaweedfs-f1384108e8559e08d4c8c9dc4d7d12b61a79e0b5.tar.xz seaweedfs-f1384108e8559e08d4c8c9dc4d7d12b61a79e0b5.zip | |
fix: Admin UI file browser uses https.client TLS config for filer communication (#7633)
* fix: Admin UI file browser uses https.client TLS config for filer communication
When the filer is configured with HTTPS (the https.filer section in security.toml),
the Admin UI file browser was still using plain HTTP for file uploads,
downloads, and viewing. This caused TLS handshake errors:
'http: TLS handshake error: client sent an HTTP request to an HTTPS server'
This fix:
- Updates FileBrowserHandlers to use the HTTPClient from weed/util/http/client,
which properly loads the TLS configuration from the https.client section
(a rough sketch of what such a TLS-aware client involves follows below)
- The HTTPClient automatically uses HTTPS when https.client.enabled=true
- All file operations (upload, download, view) now respect the TLS configuration
- Falls back to plain HTTP if TLS client creation fails
Fixes #7631
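For context, here is a rough sketch of what a TLS-aware HTTP client of this kind involves in Go. This is not the SeaweedFS implementation: the helper name and certificate paths are placeholders, and the real client is built by weed/util/http/client from the https.client section of security.toml.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// newTLSClient is a hypothetical helper: it builds an http.Client that trusts
// the given CA and presents a client certificate, which is what allows the
// Admin UI to talk to an HTTPS (and mTLS-protected) filer.
func newTLSClient(caFile, certFile, keyFile string) (*http.Client, error) {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no CA certificates found in %s", caFile)
	}

	// The client certificate enables mutual TLS against the filer.
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}

	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:      pool,
				Certificates: []tls.Certificate{cert},
			},
		},
	}, nil
}
```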
* fix: Address code review comments
- Fix fallback client Transport wiring (properly assign transport to http.Client)
- Use per-operation timeouts instead of a unified 60s timeout (see the sketch after this list):
- uploadFileToFiler: 60s (for large file uploads)
- ViewFile: 30s (original timeout)
- isLikelyTextFile: 10s (original timeout)
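A minimal sketch of the per-operation timeout pattern described above, mirroring the newClientWithTimeout helper in the diff below: one long-lived, possibly TLS-configured transport is shared, and each call site wraps it in a throwaway http.Client with its own deadline. The type and method names here are illustrative.

```go
package main

import (
	"net/http"
	"time"
)

type handlers struct {
	shared *http.Client // long-lived client holding the (possibly TLS-aware) Transport
}

// newClientWithTimeout reuses the shared Transport and its connection pool,
// but applies a per-operation deadline instead of one global 60s timeout.
func (h *handlers) newClientWithTimeout(timeout time.Duration) http.Client {
	return http.Client{
		Transport: h.shared.Transport,
		Timeout:   timeout,
	}
}

func (h *handlers) upload(req *http.Request) (*http.Response, error) {
	c := h.newClientWithTimeout(60 * time.Second) // large uploads need more time
	return c.Do(req)
}

func (h *handlers) view(req *http.Request) (*http.Response, error) {
	c := h.newClientWithTimeout(30 * time.Second) // quick read of a small file
	return c.Do(req)
}
```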
* fix: Proxy file downloads through Admin UI for mTLS support
The DownloadFile function previously used a browser redirect, which would
fail when the filer requires mutual TLS (client certificates), since the
browser doesn't have these certificates.
Now the Admin UI server proxies the download, using its TLS-aware HTTP
client with the configured client certificates, then streams the
response to the browser.
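The idea, sketched below with plain net/http rather than gin and with a placeholder URL: the Admin UI fetches the file itself, using its own client (which can hold the mTLS certificates), and streams the body to the browser instead of redirecting the browser to the filer.

```go
package main

import (
	"io"
	"net/http"
)

// proxyDownload returns a handler that fetches fileURL with the server's own
// (possibly mTLS-capable) client and streams the response to the browser.
func proxyDownload(client *http.Client, fileURL string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Tie the upstream request to the browser's request context so the
		// filer request is cancelled if the client disconnects.
		req, err := http.NewRequestWithContext(r.Context(), http.MethodGet, fileURL, nil)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		resp, err := client.Do(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))
		w.WriteHeader(resp.StatusCode)
		// Stream without buffering the whole file in memory; headers are
		// already sent, so a copy error here just means a broken transfer.
		_, _ = io.Copy(w, resp.Body)
	}
}
```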
* fix: Ensure HTTP response body is closed on non-200 responses
In ViewFile, the response body was only closed on 200 OK paths,
which could leak connections on non-200 responses. Now the body
is always closed via defer immediately after checking err == nil,
before checking the status code.
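In Go terms, a sketch of the close-on-every-path pattern: defer the Close right after the error check and before any status-code branching, so even early returns for non-200 responses release the connection. The function and URL below are illustrative.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetch returns the response body of fileURL as a string.
func fetch(client *http.Client, fileURL string) (string, error) {
	resp, err := client.Get(fileURL)
	if err != nil {
		return "", err
	}
	// Deferred immediately after the err check: it also runs on the
	// non-200 return below, so no connection is leaked.
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(data), nil
}
```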
* refactor: Extract fetchFileContent helper to reduce nesting in ViewFile
Extracted the deeply nested file fetch logic (7+ levels) into a
separate fetchFileContent helper method. This improves readability
while maintaining the same TLS-aware behavior and error handling.
* refactor: Use idiomatic Go error handling in fetchFileContent
Changed fetchFileContent to return (string, error) instead of
(content string, reason string) for idiomatic Go error handling.
This enables error wrapping and standard 'if err != nil' checks.
Also improved error messages to be more descriptive for debugging,
including the HTTP status code and response body on non-200 responses.
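A small, hypothetical illustration of the (value, error) shape and %w wrapping mentioned above; fetchSomething and ErrFilerStatus are invented names, not part of the codebase.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrFilerStatus = errors.New("unexpected filer status")

// fetchSomething returns (content, error) instead of (content, reason string),
// so callers can use standard err checks and errors.Is on the wrapped cause.
func fetchSomething(status int, body string) (string, error) {
	if status != 200 {
		return "", fmt.Errorf("filer returned status %d: %s: %w", status, body, ErrFilerStatus)
	}
	return body, nil
}

func main() {
	if _, err := fetchSomething(500, "internal error"); err != nil {
		fmt.Println(errors.Is(err, ErrFilerStatus), err) // true, with a descriptive message
	}
}
```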
* refactor: Extract newClientWithTimeout helper to reduce code duplication
- Added newClientWithTimeout() helper method that creates a temporary
http.Client with the specified timeout, reusing the TLS transport
- Updated uploadFileToFiler, fetchFileContent, DownloadFile, and
isLikelyTextFile to use the new helper
- Improved error message in DownloadFile to include response body
for better debuggability (consistent with fetchFileContent)
* fix: Address CodeRabbit review comments
- Fix connection leak in isLikelyTextFile: ensure resp.Body.Close()
is called even when status code is not 200
- Use http.NewRequestWithContext in DownloadFile so the filer request
is cancelled when the client disconnects, improving resource cleanup
* fix: Escape Content-Disposition filename per RFC 2616
Filenames containing quotes, backslashes, or special characters could
break the Content-Disposition header or cause client-side parsing issues.
The handler now escapes these characters before including them in the header.
* fix: Handle io.ReadAll errors when reading error response bodies
In fetchFileContent and DownloadFile, the error from io.ReadAll was
ignored when reading the filer's error response body. Now properly
handles these errors to provide complete error messages.
* fix: Fail fast when TLS client creation fails
If TLS is enabled (https.client.enabled=true) but misconfigured,
fail immediately with glog.Fatalf rather than silently falling back
to plain HTTP. This prevents confusing runtime errors when the filer
only accepts HTTPS connections.
* fix: Use mime.FormatMediaType for RFC 6266 compliant Content-Disposition
Replace manual escaping with mime.FormatMediaType which properly handles
non-ASCII characters and special characters per RFC 6266, ensuring
correct filename display for international users.
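A small standard-library demo of the behavior this relies on (the filenames are just examples): mime.FormatMediaType quotes and escapes special ASCII characters, and on recent Go versions emits the RFC 2231/6266 extended filename* form for non-ASCII values.

```go
package main

import (
	"fmt"
	"mime"
)

func main() {
	for _, name := range []string{
		`report "final".pdf`, // quotes are escaped inside a quoted-string
		"übersicht.txt",      // non-ASCII uses the extended filename*=utf-8'' form
	} {
		// The returned string is used directly as the Content-Disposition value.
		fmt.Println(mime.FormatMediaType("attachment", map[string]string{"filename": name}))
	}
}
```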
Diffstat (limited to 'weed')
| Mode | Path | Lines changed |
|---|---|---|
| -rw-r--r-- | weed/admin/handlers/file_browser_handlers.go | 218 |

1 file changed, 165 insertions, 53 deletions
diff --git a/weed/admin/handlers/file_browser_handlers.go b/weed/admin/handlers/file_browser_handlers.go
index a0427e39f..bafaa60c3 100644
--- a/weed/admin/handlers/file_browser_handlers.go
+++ b/weed/admin/handlers/file_browser_handlers.go
@@ -5,6 +5,7 @@ import (
 	"context"
 	"fmt"
 	"io"
+	"mime"
 	"mime/multipart"
 	"net"
 	"net/http"
@@ -20,15 +21,36 @@ import (
 	"github.com/seaweedfs/seaweedfs/weed/admin/view/layout"
 	"github.com/seaweedfs/seaweedfs/weed/glog"
 	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
+	"github.com/seaweedfs/seaweedfs/weed/util/http/client"
 )

 type FileBrowserHandlers struct {
 	adminServer *dash.AdminServer
+	httpClient  *client.HTTPClient
 }

 func NewFileBrowserHandlers(adminServer *dash.AdminServer) *FileBrowserHandlers {
+	// Create HTTP client with TLS support from https.client configuration
+	// The client is created without a timeout - each operation will set its own timeout
+	// If TLS is enabled but misconfigured, fail fast to alert the operator immediately
+	// rather than silently falling back to HTTP and causing confusing runtime errors
+	httpClient, err := client.NewHttpClient(client.Client)
+	if err != nil {
+		glog.Fatalf("Failed to create HTTPS client for file browser: %v", err)
+	}
+
 	return &FileBrowserHandlers{
 		adminServer: adminServer,
+		httpClient:  httpClient,
+	}
+}
+
+// newClientWithTimeout creates a temporary http.Client with the specified timeout,
+// reusing the TLS transport from the shared httpClient.
+func (h *FileBrowserHandlers) newClientWithTimeout(timeout time.Duration) http.Client {
+	return http.Client{
+		Transport: h.httpClient.Client.Transport,
+		Timeout:   timeout,
 	}
 }

@@ -345,8 +367,15 @@ func (h *FileBrowserHandlers) uploadFileToFiler(filePath string, fileHeader *mul
 		return fmt.Errorf("failed to close multipart writer: %w", err)
 	}

-	// Create the upload URL with validated components
-	uploadURL := fmt.Sprintf("http://%s%s", filerAddress, cleanFilePath)
+	// Create the upload URL - the httpClient will normalize to the correct scheme (http/https)
+	// based on the https.client configuration in security.toml
+	uploadURL := fmt.Sprintf("%s%s", filerAddress, cleanFilePath)
+
+	// Normalize the URL scheme based on TLS configuration
+	uploadURL, err = h.httpClient.NormalizeHttpScheme(uploadURL)
+	if err != nil {
+		return fmt.Errorf("failed to normalize URL scheme: %w", err)
+	}

 	// Create HTTP request
 	req, err := http.NewRequest("POST", uploadURL, &body)
@@ -357,11 +386,11 @@ func (h *FileBrowserHandlers) uploadFileToFiler(filePath string, fileHeader *mul
 	// Set content type with boundary
 	req.Header.Set("Content-Type", writer.FormDataContentType())
-	// Send request
-	client := &http.Client{Timeout: 60 * time.Second} // Increased timeout for larger files
+	// Send request using TLS-aware HTTP client with 60s timeout for large file uploads
 	// lgtm[go/ssrf]
 	// Safe: filerAddress validated by validateFilerAddress() to match configured filer
 	// Safe: cleanFilePath validated and cleaned by validateAndCleanFilePath() to prevent path traversal
+	client := h.newClientWithTimeout(60 * time.Second)
 	resp, err := client.Do(req)
 	if err != nil {
 		return fmt.Errorf("failed to upload file: %w", err)
 	}
@@ -444,7 +473,57 @@ func (h *FileBrowserHandlers) validateAndCleanFilePath(filePath string) (string,
 	return cleanPath, nil
 }

-// DownloadFile handles file download requests
+// fetchFileContent fetches file content from the filer and returns the content or an error.
+func (h *FileBrowserHandlers) fetchFileContent(filePath string, timeout time.Duration) (string, error) {
+	filerAddress := h.adminServer.GetFilerAddress()
+	if filerAddress == "" {
+		return "", fmt.Errorf("filer address not configured")
+	}
+
+	if err := h.validateFilerAddress(filerAddress); err != nil {
+		return "", fmt.Errorf("invalid filer address configuration: %w", err)
+	}
+
+	cleanFilePath, err := h.validateAndCleanFilePath(filePath)
+	if err != nil {
+		return "", err
+	}
+
+	// Create the file URL with proper scheme based on TLS configuration
+	fileURL := fmt.Sprintf("%s%s", filerAddress, cleanFilePath)
+	fileURL, err = h.httpClient.NormalizeHttpScheme(fileURL)
+	if err != nil {
+		return "", fmt.Errorf("failed to construct file URL: %w", err)
+	}
+
+	// lgtm[go/ssrf]
+	// Safe: filerAddress validated by validateFilerAddress() to match configured filer
+	// Safe: cleanFilePath validated and cleaned by validateAndCleanFilePath() to prevent path traversal
+	client := h.newClientWithTimeout(timeout)
+	resp, err := client.Get(fileURL)
+	if err != nil {
+		return "", fmt.Errorf("failed to fetch file from filer: %w", err)
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		body, err := io.ReadAll(resp.Body)
+		if err != nil {
+			return "", fmt.Errorf("filer returned status %d but failed to read response body: %w", resp.StatusCode, err)
+		}
+		return "", fmt.Errorf("filer returned status %d: %s", resp.StatusCode, string(body))
+	}
+
+	contentBytes, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return "", fmt.Errorf("failed to read file content: %w", err)
+	}
+
+	return string(contentBytes), nil
+}
+
+// DownloadFile handles file download requests by proxying through the Admin UI server
+// This ensures mTLS works correctly since the Admin UI server has the client certificates
 func (h *FileBrowserHandlers) DownloadFile(c *gin.Context) {
 	filePath := c.Query("path")
 	if filePath == "" {
@@ -459,6 +538,12 @@ func (h *FileBrowserHandlers) DownloadFile(c *gin.Context) {
 		return
 	}

+	// Validate filer address to prevent SSRF
+	if err := h.validateFilerAddress(filerAddress); err != nil {
+		c.JSON(http.StatusInternalServerError, gin.H{"error": "Invalid filer address configuration"})
+		return
+	}
+
 	// Validate and sanitize the file path
 	cleanFilePath, err := h.validateAndCleanFilePath(filePath)
 	if err != nil {
@@ -466,16 +551,66 @@ func (h *FileBrowserHandlers) DownloadFile(c *gin.Context) {
 		return
 	}

-	// Create the download URL
-	downloadURL := fmt.Sprintf("http://%s%s", filerAddress, cleanFilePath)
+	// Create the download URL with proper scheme based on TLS configuration
+	downloadURL := fmt.Sprintf("%s%s", filerAddress, cleanFilePath)
+	downloadURL, err = h.httpClient.NormalizeHttpScheme(downloadURL)
+	if err != nil {
+		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to construct download URL: " + err.Error()})
+		return
+	}
+
+	// Proxy the download through the Admin UI server to support mTLS
+	// lgtm[go/ssrf]
+	// Safe: filerAddress validated by validateFilerAddress() to match configured filer
+	// Safe: cleanFilePath validated and cleaned by validateAndCleanFilePath() to prevent path traversal
+	// Use request context so download is cancelled when client disconnects
+	req, err := http.NewRequestWithContext(c.Request.Context(), "GET", downloadURL, nil)
+	if err != nil {
+		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to create request: " + err.Error()})
+		return
+	}
+	client := h.newClientWithTimeout(5 * time.Minute) // Longer timeout for large file downloads
+	resp, err := client.Do(req)
+	if err != nil {
+		c.JSON(http.StatusBadGateway, gin.H{"error": "Failed to fetch file from filer: " + err.Error()})
+		return
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		body, err := io.ReadAll(resp.Body)
+		if err != nil {
+			c.JSON(resp.StatusCode, gin.H{"error": fmt.Sprintf("Filer returned status %d but failed to read response body: %v", resp.StatusCode, err)})
+			return
+		}
+		c.JSON(resp.StatusCode, gin.H{"error": fmt.Sprintf("Filer returned status %d: %s", resp.StatusCode, string(body))})
+		return
+	}

 	// Set headers for file download
 	fileName := filepath.Base(cleanFilePath)
-	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", fileName))
-	c.Header("Content-Type", "application/octet-stream")
+	// Use mime.FormatMediaType for RFC 6266 compliant Content-Disposition,
+	// properly handling non-ASCII characters and special characters
+	c.Header("Content-Disposition", mime.FormatMediaType("attachment", map[string]string{"filename": fileName}))

-	// Proxy the request to filer
-	c.Redirect(http.StatusFound, downloadURL)
+	// Use content type from filer response, or default to octet-stream
+	contentType := resp.Header.Get("Content-Type")
+	if contentType == "" {
+		contentType = "application/octet-stream"
+	}
+	c.Header("Content-Type", contentType)
+
+	// Set content length if available
+	if resp.ContentLength > 0 {
+		c.Header("Content-Length", fmt.Sprintf("%d", resp.ContentLength))
+	}
+
+	// Stream the response body to the client
+	c.Status(http.StatusOK)
+	_, err = io.Copy(c.Writer, resp.Body)
+	if err != nil {
+		glog.Errorf("Error streaming file download: %v", err)
+	}
 }

 // ViewFile handles file viewing requests (for text files, images, etc.)
@@ -559,46 +694,13 @@ func (h *FileBrowserHandlers) ViewFile(c *gin.Context) {
 			viewable = false
 			reason = "File too large for viewing (>1MB)"
 		} else {
-			// Get file content from filer
-			filerAddress := h.adminServer.GetFilerAddress()
-			if filerAddress != "" {
-				// Validate filer address to prevent SSRF
-				if err := h.validateFilerAddress(filerAddress); err != nil {
-					viewable = false
-					reason = "Invalid filer address configuration"
-				} else {
-					cleanFilePath, err := h.validateAndCleanFilePath(filePath)
-					if err == nil {
-						fileURL := fmt.Sprintf("http://%s%s", filerAddress, cleanFilePath)
-
-						client := &http.Client{Timeout: 30 * time.Second}
-						// lgtm[go/ssrf]
-						// Safe: filerAddress validated by validateFilerAddress() to match configured filer
-						// Safe: cleanFilePath validated and cleaned by validateAndCleanFilePath() to prevent path traversal
-						resp, err := client.Get(fileURL)
-						if err == nil && resp.StatusCode == http.StatusOK {
-							defer resp.Body.Close()
-							contentBytes, err := io.ReadAll(resp.Body)
-							if err == nil {
-								content = string(contentBytes)
-								viewable = true
-							} else {
-								viewable = false
-								reason = "Failed to read file content"
-							}
-						} else {
-							viewable = false
-							reason = "Failed to fetch file from filer"
-						}
-					} else {
-						viewable = false
-						reason = "Invalid file path"
-					}
-				}
-			} else {
-				viewable = false
-				reason = "Filer address not configured"
+			// Fetch file content from filer
+			var err error
+			content, err = h.fetchFileContent(filePath, 30*time.Second)
+			if err != nil {
+				reason = err.Error()
 			}
+			viewable = (err == nil)
 		}
 	} else {
 		// Not a text file, but might be viewable as image or PDF
@@ -893,18 +995,28 @@ func (h *FileBrowserHandlers) isLikelyTextFile(filePath string, maxCheckSize int
 		return false
 	}

-	fileURL := fmt.Sprintf("http://%s%s", filerAddress, cleanFilePath)
+	// Create the file URL with proper scheme based on TLS configuration
+	fileURL := fmt.Sprintf("%s%s", filerAddress, cleanFilePath)
+	fileURL, err = h.httpClient.NormalizeHttpScheme(fileURL)
+	if err != nil {
+		glog.Errorf("Failed to normalize URL scheme: %v", err)
+		return false
+	}

-	client := &http.Client{Timeout: 10 * time.Second}
 	// lgtm[go/ssrf]
 	// Safe: filerAddress validated by validateFilerAddress() to match configured filer
 	// Safe: cleanFilePath validated and cleaned by validateAndCleanFilePath() to prevent path traversal
+	client := h.newClientWithTimeout(10 * time.Second)
 	resp, err := client.Get(fileURL)
-	if err != nil || resp.StatusCode != http.StatusOK {
+	if err != nil {
 		return false
 	}
 	defer resp.Body.Close()

+	if resp.StatusCode != http.StatusOK {
+		return false
+	}
+
 	// Read first few bytes to check if it's text
 	buffer := make([]byte, min(maxCheckSize, 512))
 	n, err := resp.Body.Read(buffer)
