Visual Studio .NET - How to download large files in ASP.NET

Asked By Rajesh Kumar on 16-Jun-08 10:05 AM

Hi, can anyone give me the code to download large files (more than 700 MB) in ASP.NET? I am using Response.Write, which is not working.

try it - Deepak Ghule replied to Rajesh Kumar on 16-Jun-08 10:15 AM

' Download the selected file (requires Imports System.IO)
Response.Buffer = False
Server.ScriptTimeout = 21600   ' seconds - give very large downloads plenty of time

' currentPath and StrFileName come from the page (folder and file chosen by the user)
Dim FullPath As String = Server.MapPath(currentPath & StrFileName)
Dim DownloadFileInfo As New FileInfo(FullPath)
Dim Name As String = DownloadFileInfo.Name
Dim Ext As String = DownloadFileInfo.Extension
Dim stream As New FileStream(FullPath, FileMode.Open, FileAccess.Read, FileShare.Read)

' Map the file extension to a MIME type
Dim StrFileType As String
If Not String.IsNullOrEmpty(Ext) Then
    Ext = Ext.ToLower()
End If
Select Case Ext
    Case ".exe"
        StrFileType = "application/octet-stream"
    Case ".zip"
        StrFileType = "application/x-zip-compressed"
    Case ".pdf"
        StrFileType = "application/pdf"
    Case ".doc"
        StrFileType = "application/msword"
    Case ".dll"
        StrFileType = "application/x-msdownload"
    Case ".html", ".htm"
        StrFileType = "text/html"
    Case ".txt"
        StrFileType = "text/plain"
    Case ".jpeg", ".jpg"
        StrFileType = "image/jpeg"
    Case Else
        StrFileType = "application/octet-stream"
End Select

' Clear anything already written and add the download headers
Response.ClearHeaders()
Response.ClearContent()
Response.Clear()
Response.ContentType = StrFileType
Response.AddHeader("Content-Disposition", "attachment; filename=""" & Name & """")
Response.AddHeader("Content-Length", DownloadFileInfo.Length.ToString())

Dim buffer(10000) As Byte
Dim length As Integer
Dim bytesToRead As Long = stream.Length   ' total bytes still to send
Dim UserHasDownload As Boolean = False

Try
    ' Read the file in small chunks and write each chunk to the output stream.
    While bytesToRead > 0
        ' Make sure the client is still connected.
        If Response.IsClientConnected Then
            length = stream.Read(buffer, 0, buffer.Length)
            Response.OutputStream.Write(buffer, 0, length)
            Response.Flush()
            bytesToRead = bytesToRead - length
            UserHasDownload = True
        Else
            ' The client disconnected - stop sending.
            bytesToRead = -1
            UserHasDownload = False
        End If
    End While
Catch
    ' An error occurred while streaming the file.
Finally
    stream.Close()
End Try

change some lines in the web.config file - Deepak Ghule replied to Rajesh Kumar on 16-Jun-08 10:17 AM

Set these attributes on the httpRuntime element in web.config:

executionTimeout="21600" maxRequestLength="2097151"
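
For context, those attributes go on the <httpRuntime> element under <system.web>; a minimal sketch of web.config with just the two values quoted above (everything else left at its defaults):

<configuration>
  <system.web>
    <!-- executionTimeout is in seconds and is only honoured when <compilation debug="false" /> -->
    <!-- maxRequestLength is in KB and mainly limits uploads, not downloads -->
    <httpRuntime executionTimeout="21600" maxRequestLength="2097151" />
  </system.web>
</configuration>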

You can download a file in 4 ways - Deepak Ghule replied to Rajesh Kumar on 16-Jun-08 10:21 AM

You can download a file in 4 ways:

Allow direct access to the file

Using Response.WriteFile

Streaming the file using Response.BinaryWrite

Use an ISAPI Filter


This is a good link that explains it very well: "Optimizing the Downloading of Large Files in ASP.NET".

Refer to it:

http://www.objectsharp.com/cs/blogs/bruce/articles/1571.aspx

check this.. - Santhosh N replied to Rajesh Kumar on 16-Jun-08 11:26 AM
You can do that more efficiently using HTTP headers.
Check this article, which pinpoints the issue you are facing and shows how to do it, even with the ability to resume downloads:
http://www.devx.com/dotnet/Article/22533/1954?pf=true
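
For reference, a minimal sketch of that idea, assuming a generic .ashx handler, a hard-coded placeholder path and no authentication: honour the client's Range header and answer with 206 Partial Content so interrupted downloads can resume.

// Hypothetical IHttpHandler - resumable download via the HTTP Range header.
using System;
using System.IO;
using System.Web;

public class ResumableDownload : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext ctx)
    {
        string path = ctx.Server.MapPath("~/files/big.zip");   // placeholder file
        FileInfo info = new FileInfo(path);
        long start = 0, end = info.Length - 1;

        // "Range: bytes=start-end" - the client asks for a slice when resuming.
        string range = ctx.Request.Headers["Range"];
        if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
        {
            string[] parts = range.Substring(6).Split('-');
            if (parts[0] != "") start = long.Parse(parts[0]);
            if (parts.Length > 1 && parts[1] != "") end = long.Parse(parts[1]);
            ctx.Response.StatusCode = 206;   // Partial Content
            ctx.Response.AppendHeader("Content-Range",
                string.Format("bytes {0}-{1}/{2}", start, end, info.Length));
        }

        ctx.Response.Buffer = false;
        ctx.Response.ContentType = "application/octet-stream";
        ctx.Response.AppendHeader("Accept-Ranges", "bytes");
        ctx.Response.AppendHeader("Content-Disposition", "attachment; filename=" + info.Name);
        ctx.Response.AppendHeader("Content-Length", (end - start + 1).ToString());

        // Stream just the requested slice in small chunks so memory use stays flat.
        using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            fs.Seek(start, SeekOrigin.Begin);
            byte[] buffer = new byte[8192];
            long remaining = end - start + 1;
            while (remaining > 0 && ctx.Response.IsClientConnected)
            {
                int read = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                if (read <= 0) break;
                ctx.Response.OutputStream.Write(buffer, 0, read);
                ctx.Response.Flush();
                remaining -= read;
            }
        }
    }
}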
reply - alice johnson replied to Rajesh Kumar on 16-Jun-08 12:23 PM
// Fragment of the chunked-download loop; objResponse, objStream, buffer,
// bufferSize and bytesToRead come from the surrounding code.
if (objResponse.IsClientConnected)
{
    // Read the next chunk and push it straight to the client.
    intLengthOfReadChunk = objStream.Read(buffer, 0, Math.Min(buffer.Length, bytesToRead));
    objResponse.OutputStream.Write(buffer, 0, intLengthOfReadChunk);
    objResponse.Flush();
    Array.Clear(buffer, 0, bufferSize);
    bytesToRead -= intLengthOfReadChunk;
}
else
{
    // Download interrupted - the client disconnected.
    bytesToRead = -1;
    wasDownloadInterrupted = true;
}

Or change this to use Response.TransmitFile, which writes the file to the response without buffering it in memory:

Response.TransmitFile(pathToFile);
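
A minimal sketch of a complete download using TransmitFile (the path and file name here are placeholders):

// Inside a page or handler; TransmitFile writes the file to the response
// without buffering it in worker-process memory.
string path = Server.MapPath("~/files/big.zip");   // placeholder path
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AppendHeader("Content-Disposition", "attachment; filename=big.zip");
Response.AppendHeader("Content-Length", new System.IO.FileInfo(path).Length.ToString());
Response.TransmitFile(path);
Response.End();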

download large files in ASP.NET - Swapnil Salunke replied to Rajesh Kumar on 16-Jun-08 03:00 PM
Hello Rajesh

There are basically the following ways to download a large file from the server:

- Allow direct access to the file
- Using Response.WriteFile
- Streaming the file using Response.BinaryWrite
- Use an ISAPI filter

Using Response.WriteFile

<%@ Page language="c#" AutoEventWireup="false" %>
<html>
   <body>
        <%
           if (Request.QueryString["File"] != null)
                Response.WriteFile (Request.QueryString["File"]);
        %>
   </body>
</html>


Using Stream

// fileName, bufferSize and ctx (the current HttpContext) come from the surrounding handler.
using( Stream s = new FileStream( fileName, FileMode.Open,
                                  FileAccess.Read, FileShare.Read, bufferSize ) )
{
      byte[] buffer = new byte[bufferSize];
      int count = 0;
      int offset = 0;
      // Read the file in bufferSize chunks and push each chunk to the client.
      while( (count = s.Read( buffer, offset, buffer.Length ) ) > 0 )
      {
            ctx.Response.OutputStream.Write( buffer, offset, count );
      }
}
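
The snippet above leaves the response headers out. A hedged sketch of what would typically be set before the loop, reusing the same ctx and fileName (and assuming System.IO is imported):

// Typical download headers, set before writing to ctx.Response.OutputStream.
ctx.Response.Buffer = false;
ctx.Response.ContentType = "application/octet-stream";
ctx.Response.AppendHeader("Content-Disposition",
    "attachment; filename=" + Path.GetFileName(fileName));
ctx.Response.AppendHeader("Content-Length",
    new FileInfo(fileName).Length.ToString());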
This link describes it all:
http://www.objectsharp.com/cs/blogs/bruce/articles/1571.aspx

This Delphi for .NET routine can also be used; it supports resuming downloads via the Range header:
procedure DownloadFile(const FilePath : String; const FileName : String =''; const ContentType : String = '') ;
   type
     TStringArray = array of string;
   var
     DownloadFileName : string;
     fi : FileInfo;
     StartPos, FileSize, EndPos : System.Int64;
     Range : string;
     StartEnd : TStringArray;
   begin
     If Not System.IO.File.Exists(FilePath) Then Exit;

     StartPos := 0;

     fi := FileInfo.Create(FilePath) ;
     FileSize := fi.Length;
     EndPos := FileSize - 1;   // index of the last byte to send (inclusive)

     HttpContext.Current.Response.Clear() ;
     HttpContext.Current.Response.ClearHeaders() ;
     HttpContext.Current.Response.ClearContent() ;

     Range := HttpContext.Current.Request.Headers['Range'];
     If Assigned(Range) AND (Range <> '') Then
     Begin
       // "Range: bytes=start-end" - either part may be empty
       StartEnd := Range.Substring(Range.LastIndexOf('=') + 1).Split(['-']) ;
       If Not (StartEnd[0] = '') Then
         StartPos := Convert.ToInt64(StartEnd[0]) ;
       If (System.Array(StartEnd).GetUpperBound(0) >= 1) And (Not (StartEnd[1] = '')) Then
         EndPos := Convert.ToInt64(StartEnd[1]) ;
     End;

     If EndPos >= FileSize Then EndPos := FileSize - 1;

     HttpContext.Current.Response.StatusCode := 206;
     HttpContext.Current.Response.StatusDescription := 'Partial Content';
     HttpContext.Current.Response.AppendHeader('Content-Range', 'bytes ' + StartPos.ToString + '-' + EndPos.ToString + '/' + FileSize.ToString) ;

   If Not (ContentType = '') And (StartPos = 0) Then
   Begin
     HttpContext.Current.Response.ContentType := ContentType;
   End;

   If FileName = '' Then
     DownloadFileName := fi.Name
   else
     DownloadFileName := FileName;

   HttpContext.Current.Response.AppendHeader('Content-disposition', 'attachment; filename=' + DownloadFileName) ;
   HttpContext.Current.Response.WriteFile(FilePath, StartPos, EndPos - StartPos + 1) ;   // offset and number of bytes to send
   HttpContext.Current.Response.&End;
End;
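
It could then be called from a page event handler as, for example, DownloadFile(Server.MapPath('~/files/big.zip'), '', 'application/zip') ; (the path here is just a placeholder).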

Hope this helps you
http://delphi.about.com/cs/adptips2004/a/bltip0504_2.htm
http://www.west-wind.com/weblog/posts/76293.aspx
http://dotnetslackers.com/Community/blogs/haissam/archive/2007/04/03/Downloading-Files-C_2300_.aspx

Happy coding, take care.

Check this to download large files - Sagar P replied to Rajesh Kumar on 17-Jun-08 01:39 AM

There are many ways to do this:

  1. Using Response.WriteFile, you can easily download a large file
  2. Using a stream, i.e. Response.BinaryWrite
  3. Allowing direct access to that file
  4. Using an ISAPI filter

Direct Access

The most obvious approach to delivering a file across the Internet is to place it in a directory accessible by a web server. Then anyone can use a browser to retrieve it. As easy as that sounds, there are a number of problems that make this alternative unworkable for all but the simplest of applications. What if, for example, you don't want to make the file available to everyone? While adding NTFS-based security would protect the file from unwanted access, there is the administrative hassle of creating a local machine account for every user.

More flexible access control mechanisms run into similar problems.  If you are using authentication schemes for which SQL Server or Oracle provide the data store, direct access is not going to be effective.  There is no easy way to validate the credentials of an incoming request made through a browser against a database (without getting ASP.NET involved, that is). When all of these factors are taken into consideration, it becomes clear that direct access only works in very limited situations.

Response.WriteFile

Since direct access isn’t realistic, ASP.NET must be a part of the solution. Using the WriteFile method on the Response object is the simplest way to programmatically send a file to the client.  The technique is quite simple.  Create an .ASPX page or other HttpHandler to process the request.  As part of the processing for the request, determine which file to download and use the Response.WriteFile method to send it to the client.  The following is a simple .ASPX page that demonstrates this.

<%@ Page language="c#" AutoEventWireup="false" %>
<html>
   <body>
        <%
           if (Request.QueryString["File"] != null)
                Response.WriteFile (Request.QueryString["File"]);
        %>
   </body>
</html>

One of the benefits of using Response.WriteFile is that the security for the request is much more extensible. The incoming requests are processed through the normal Internet Information Server (IIS) pipeline, which means that IIS authentication can be applied to the request. And all of the events necessary to plug in your own custom authentication are available.
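
As a rough illustration (not from the article), a hedged sketch of such a guard; UserMayDownload is a made-up application-specific check, not a framework API:

// Hypothetical authorization guard in front of Response.WriteFile.
string file = Request.QueryString["File"];
if (!Request.IsAuthenticated || !UserMayDownload(User.Identity.Name, file))   // UserMayDownload is made up
{
    Response.StatusCode = 401;   // refuse the download
    Response.End();
}
Response.WriteFile(file);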

[Figure 1: The ASP.NET request pipeline - a request enters Inetinfo.exe, is handed to aspnet_isapi.dll, crosses named pipes into the worker process (Aspnet_wp.exe) and its AppDomain, and the response (the document) travels back the same way.]
So what is the downside to using WriteFile? It does not work well when large files are involved.  To understand why, a brief description of IIS’s architecture helps.

When a request arrives at the web server, Inetinfo.exe determines how it should be processed. For .aspx requests, the aspnet_isapi.dll handler is used. Aspnet_isapi.dll in turn communicates the request to the worker process (aspnet_wp.exe). The worker process contains one or more application domains (one per virtual directory, typically). The web site actually runs as an assembly loaded into the appropriate AppDomain within the worker process. It is this assembly that ultimately handles the request, compiling and transmitting the response as necessary.

Going into a little more detail, aspnet_isapi.dll is a Win32 (i.e. not managed code) DLL. The purpose of aspnet_isapi.dll is threefold. First, it is responsible for routing the incoming request to the worker process. Second, it monitors the health of the worker process, killing aspnet_wp.exe off if performance falls below a specified threshold. Finally, aspnet_isapi.dll is responsible for starting the worker process before passing along the first request after IIS has been reset. It is the first of these three tasks that is of interest to us.

The routing of the request as performed by aspnet_isapi.dll requires that a communication mechanism be established between it and the worker process. This is accomplished through a set of named pipes. A named pipe is a mechanism that, not surprisingly, works like a pipe.  Data is pushed into one end of the pipe and retrieved from the other.  For local, interprocess communications, pipes are the most efficient available technique.

Given this information, the flow of each .aspx request is: InetInfo.exe to aspnet_isapi.dll through a named pipe to the worker process.  Once the request has been evaluated and a response formulated, that information is pushed back across the named pipe to aspnet_isapi.dll.  Then back through Inetinfo.exe to the requestor.

If you put this architecture into the context of processing requests for large files, you can see why there might be a problem with performance. As the file is moved back through the pipe to aspnet_isapi.dll, it gets placed into memory. Once the entire file has been piped, the file is then transmitted to the requestor. Empirical evidence suggests almost a one-to-one growth in the memory consumed by inetinfo.exe (as shown by perfmon) and the size of the file being retrieved. Figure 2 contains the perfmon output for inetinfo.exe as two separate requests to retrieve a file 23 MB in size are processed using Response.WriteFile.

[Figure 2: perfmon output for inetinfo.exe while two 23 MB downloads are served via Response.WriteFile - memory use grows roughly in step with the file size; figure not reproduced here.]

Although there is no way to illustrate it here, this memory growth cannot be avoided, even when buffering is turned off at every step along the way, starting at the ASP.NET page level with Response.Buffer = false. Sure, the growth is temporary, but think of how much fun your server will have processing 3 or 4 simultaneous requests for 50 MB files. Or 30 or 40 requests. You get the picture.

Response.BinaryWrite and Response.OutputStream.Write

Given the memory growth that is the symptom of using the WriteFile method, the next logical step is to try to break the outgoing file into pieces.  After all, if inetinfo.exe is placing the entire response into memory, then giving it smaller pieces should minimize the impact of transmitting a large file. Before the next piece comes in, the previous piece is sent on to the client, keeping the overall memory usage down. Fortunately, this is not a challenging problem, as the following code demonstrates.

// fileName, bufferSize and ctx (the current HttpContext) come from the surrounding handler.
using( Stream s = new FileStream( fileName, FileMode.Open,
                                  FileAccess.Read, FileShare.Read, bufferSize ) )
{
      byte[] buffer = new byte[bufferSize];
      int count = 0;
      int offset = 0;
      // Read the file in bufferSize chunks and push each chunk to the client.
      while( (count = s.Read( buffer, offset, buffer.Length ) ) > 0 )
      {
            ctx.Response.OutputStream.Write( buffer, offset, count );
      }
}

The code that adds the appropriate headers to the response has been left off for conciseness.

One of the benefits of this approach is that there is a lot more control that the developer can exert over the download process. Want to change the size of the buffer? No problem. Want to put a short pause between chunks in an attempt to give inetinfo.exe a chance to reclaim some memory? No sweat. Unfortunately, all of these scenarios are useless when it comes to large files. Regardless of how the transmitted files are broken up or at which level buffering is enabled or disabled, large files end up becoming a large memory sink for the ASP.NET process.

ISAPI Filters

Given what has been discussed so far, it seems apparent that the problem with returning large files is rooted in inetinfo.exe. More accurately, it seems to be found in the area surrounding where the named pipes are used to communicate between the aspnet_isapi.dll and the worker process. After all, when aspnet_isapi.dll isn’t involved (such as in the first scenario), there is no problem. For ASP.NET requests, large file transfers mean large memory consumption. So what can be done to reduce the amount of data that moves through the named pipe? What would be nice is if you could combine the speed of direct access with the authentication and authorization capabilities offered by .ASPX pages. Luckily, that combination is within our power to deliver.

The purpose of an ISAPI filter is suggested by its name. It is a DLL that sits between the requester and the web service. With an ISAPI filter, it is possible to intercept both incoming and outgoing messages. As part of the interception process, the messages going in either direction can be modified. As well, because the filter is not an endpoint for a request, there is no need for the client to be aware that the filter is even being used. The term for this type of functionality is orthogonal, a fact that I mention only because it’s my favorite word.

So let’s consider what the purpose of this ISAPI filter is in the context of our dilemma.  In the .ASPX page, we will take the name of the requested file and perform the necessary authentication and authorization. Then, as part of the process, the path to the requested file is placed into the headers that are part of the response.  The response is then directed back towards the requestor. This is where the ISAPI filter kicks in.

The ISAPI filter in question interposes itself between the worker process and the requestor.  It examines the outgoing message looking for a special header…one that contains a path to a file.  When that header is detected, it extracts the file path, removes the header from the response and adds the contents of the file path to the response.

From the client side, the response now looks exactly like what is expected.  From the server side, the request was authenticated and authorized properly.  From the performance side, the file was streamed into the response as part of the inetinfo.exe process.  Most importantly, it didn’t come through the named pipe that is used to communicate with the worker process.  And the problem with the momentary memory growth goes away.
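
To make the division of labour concrete, here is a hedged sketch of the page side only; the filter itself is unmanaged code and is covered in the linked article. The header name "X-Internal-FilePath" and the AuthorizeAndResolveFile helper are made up for illustration; the real filter defines its own convention.

// Hypothetical .aspx/handler code: authenticate and authorize, then emit only a
// marker header. The ISAPI filter (not shown) removes this header and streams the
// file into the response on the way out, so the file never crosses the named pipe.
string physicalPath = AuthorizeAndResolveFile(Request.QueryString["File"]);   // made-up helper
if (physicalPath != null)
{
    Response.Clear();
    Response.AppendHeader("X-Internal-FilePath", physicalPath);   // made-up header name
    Response.End();   // send the (nearly empty) response; the filter supplies the body
}
else
{
    Response.StatusCode = 403;   // not authorized for this file
}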

 

Go through this link for all the details:

http://www.objectsharp.com/cs/blogs/bruce/articles/1571.aspx

 

Also see these:

http://www.devx.com/dotnet/Article/22533/1954?pf=true

http://delphi.about.com/cs/adptips2004/a/bltip0504_2.htm

Best Luck!!!!!!!!!!!!!
Sujit.