Set A Timeout Value In cURL

Today we’re going to discuss a topic you probably won’t ever need, but one that’s worth knowing: we’re going to set a timeout value in cURL. We’re telling the cURL application to quit trying if it takes too long. It doesn’t come up often, but we might as well learn it.

How often do you need this? Well, that depends on you and your workflow. Me? Well, let’s just say that it’s in my notes. I’m not sure that I’ve ever actually used it productively, but it is in my notes. Now? Well, now it’s in your notes! Or, at least it’s here and searchable should you ever actually need to set a timeout value in cURL.

So, what is cURL? I’ve written about it before (some links to follow) but it’s a tool to transfer a URL. That’s exactly what the man page says. Specifically, it says:
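curl - transfer a URL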

If you want to see what the HTML looks like for this site, you can run something like this (the site’s address is assumed here):
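curl https://linux-tips.us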

(That’s not particularly helpful, but you can do it.)

I mentioned that I’d written about cURL before and it may be of some benefit to read these articles (or at least skim them) if you’re unfamiliar with the cURL application.

Let’s Have a Limited Look at Linux’s cURL Application
How To: Make ‘curl’ Ignore Certificate Errors
How To: Add A New Line With CURL

You can see a couple of useful applications of cURL:

Weather In The Terminal? We can do that!
How To: Find Your IP Address Through Your Terminal

See? So, cURL has some use – even for a regular desktop user. If any of those things take too long, you can set a timeout value for cURL, which is what this article is all about.

Set A Timeout Value In cURL:

cURL is a terminal-based tool. Sure, some GUI applications use it in the background, but it’s a terminal tool. As such, you are going to need a terminal available. You should be able to press CTRL + ALT + T to access a terminal. If not, open one from your application menu.

With your terminal open, the syntax for setting one of the timeout values in cURL is pretty basic and easy to understand. Try this:
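curl --connect-timeout <time_limit> <URL>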

The time_limit value is in seconds. If you wanted to load the content of this site’s home page and set a timeout value of 10 seconds, you’d run this command:
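curl --connect-timeout 10 https://linux-tips.us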

(Again, not very useful.)

But that timeout value only covers the connection phase. So, the server will need to accept the connection within 10 seconds, or else the cURL process will shut down. The transfer itself can still take as long as it takes.

There’s another timeout value for cURL. You can set the overall time limit, that is, the entire process (including transferring the data) must be completed within that timeframe. If it isn’t, the cURL process will shut itself down. The syntax for that type of timeout value would be like so:
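curl --max-time <time_limit> <URL>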

So, if you wanted to make sure the entire transfer of data was done in under 60 seconds, your command would look like this:
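curl --max-time 60 https://linux-tips.us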

(Again, not very useful – but it should certainly take less than 60 seconds!)

I suppose you might find some of this useful if you’re cURLing files weightier than a web page. You can cURL actual files and write that data to your terminal’s standard output, or to a file. That’s what cURL does, after all.
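For example, pulling down a weightier file with a ten-minute ceiling, and using the -o flag to write the output to a file instead of the screen, might look something like this (with a stand-in URL):

curl --max-time 600 -o some-file.iso https://example.com/some-file.iso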

Closure:

Well, this wasn’t a very long article. It doesn’t cover a great deal and probably won’t be useful to 99 out of 100 people. That’s okay. Not all of my articles are meant for the 99% and sometimes you just gotta write what you feel like writing. This is what I felt like writing. It probably won’t do well for search engine results and that’s okay. Someday, somebody will want this information, type it into Google, and find this site. Or another one just like it, I suppose…


Make ‘wget’ Resume From An Interrupted Download

Today’s just going to be a quick article, one where we learn how to make ‘wget’ resume from an interrupted download. This is a darned useful flag you can add to a wget command, especially if you’re in an area with sketchy connectivity. To learn how to make wget resume from an interrupted download, read on!

So many of my articles are written because of something I did recently. Many are still based on my copious notes (we’re well over 300 articles here on Linux Tips), but those will run out eventually. I’m often thinking of new ideas for articles and sometimes my day-to-day computing gives me an article idea that’s not from my notes. This is one of those…

Today, we’re going to cover yet another wget feature! We’ve had many wget articles. Here are a few of them:

Limit The Download Speed For ‘wget’
Rename A File Downloaded With ‘wget’
How To: Hide The Output From wget

And we’ve used wget in many articles. Go search for “wget”.

By now, many of my regular readers will be more than familiar with wget. So, what is wget? It’s a terminal-based tool that you use to download files. I use it often. You’re encouraged to check man wget for more information.

I use it outside of the browser, even if I found the download link via a browser. It’s just that handy and the throughput rate seems to be greater with wget (oftentimes). If you check the man page, wget describes itself as:

The non-interactive network downloader.

Which is exactly what it does. Which is nice…

How To: Make ‘wget’ Resume From An Interrupted Download:

You’ll kinda sorta maybe need an open terminal for this article. If you don’t know how to open the terminal, you can do so with your keyboard. Press CTRL + ALT + T and your default terminal should pop open.

With your terminal open, you just need a file to download… I’ll let you pick that. You also need to interrupt your download, so that you can practice this…

Wait, no… That’s just silly. Instead of practicing this, just learn from my usage and call it good. There’s no need to replicate this until you need it. Yeah, that’s the ticket!

So, imagine my surprise when I learned that Gentoo now has a live USB edition. (I was pretty surprised.) I immediately decided to download the file, though I’ve still not tried it. To download the .iso, I used the fantastic wget tool.

My terminal was already open. My present working directory was already the ‘Downloads’ directory. I had nothing to do except enter the wget command. The command I entered looked like this (with a stand-in URL in place of the actual Gentoo .iso address):
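wget https://example.com/livegui-amd64.iso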

As you may know, my DSL provider made me angry and I’m now using a combination of a mobile hot spot and satellite. My mobile provider likes to disconnect me for 30 minutes at a time and does so at varied intervals.

I normally just switch to the satellite connection for 30 minutes but wget didn’t like that. I’d already downloaded half of the file while tethered to my phone and didn’t want to download it again. Downloading data I’d previously downloaded is just a pain in the butt and slows things down. So, I added the -c flag. The command I then used, once connectivity was restored, was this:
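wget -c https://example.com/livegui-amd64.iso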

Sure enough, wget resumed from where it left off when the connection dropped out. I didn’t have to download that all over again. Sure, wget will automatically retry a few times (which you can modify) but it’s not going to keep trying for 30 minutes (by default) or longer. So, this is how you make wget resume from an interrupted download.
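By the way, the retry count is adjustable with wget’s --tries flag (0 means keep retrying indefinitely). If you want wget to be more persistent on its own, something like this should do it, again with the stand-in URL:

wget --tries=0 -c https://example.com/livegui-amd64.iso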

Closure:

See, it’s easy to make wget resume from an interrupted download. Was it worth writing an entire article for a single flag? I’d say yes. Well, of course, I would say yes. If I didn’t think it was worth an entire article, I wouldn’t have written an entire article about it!

Ah well…

And now you know…

You’re welcome…

