Meta: The State Of Linux-Tips #19

Today we’re just going to have a look at some of what’s going on here at Linux-Tips, with an article about the state of Linux-Tips. It’s a regular thing that I try to do. I don’t always remember and I don’t always have anything new to share. So, as you can see, there are fewer meta articles than there are months the site has existed.

I almost didn’t write one this month but fate decided to be a cruel mistress…

See, I’m not always ahead of the curve. A lot of the time, I write an article the night before it is due to be published. I often have a spare article that can fill in if I am somehow prevented from writing that article. Alas, I do not have such an article, though I did consider updating an older article.

But, I mentioned fate. Fate is a fickle mistress and it was fate that decided my internet connection would barely work – when it did work. So, rather than take my sleep meds, I stayed awake late just so that I could write this article.

I had another article planned, but it requires some research and screenshots. Those have to be uploaded. My current connection speed isn’t all that dissimilar to dial-up rates. There will be no uploading of images tonight.

Sure, you might think that I’d learn my lesson and always have a spare article, but that’s just not going to happen. Nobody has been willing to write any articles lately, so that means I must write all of them myself. To do so with this consistency is, frankly, amazing. I dare say that I’m unique in this regard, especially over this time frame.

So, you get a meta article…

It takes longer to write these than it takes to write a regular article, but it’s less bandwidth than the article I had planned. I’m going to take my sleep meds and hope this is finished in time.

The State Of Linux-Tips:

This is a good thing…

During those moments of interruption, when I couldn’t even load the full page (it’s huge) to write the article, I was seeing 12 to 15 people online at the same time. That’s insane. That’s well and truly insane.

I used to be amazed if I had 20 visitors in a day. This month, I’m averaging almost 700 visits per day. I am so grateful for the opportunity to share my writing with that many people. Yes, those numbers pale against the big sites, but they’re huge to me.

So, not much has changed since the last meta article. The same browsers are performing the same as they did last month. The same pages are drawing proportionally the same traffic as they did last month. To save me some time, why not read the previous article:

Meta: The State Of Linux-Tips #18

The big news was in the last meta article:

Meta: Getting Indexed In Bing

I am so excited about being indexed in Bing! However, it means pretty much nothing in terms of the total number of visitors. They say that Bing has 3% of the search market, but that’s not what I’m seeing.

I realize Bing just started sending me traffic, but they’ve sent 85 visitors out of over 20,000 visitors. I mean, that’s great and all, but they don’t amount to much. Today has been a good day with over 1000 visitors. Bing’s traffic is a tiny amount – but I’m still so grateful and so excited.

More:

You know, at the top of each article is a tool to help you share the articles with various link services and social media services. I should probably remove it as it’s wasted bandwidth. As near as I can tell, it has never been used – and it has been there since the very first day of the site’s existence.

Ad revenue doesn’t add up to a whole lot. We will chew through 35 GB worth of CDN traffic this month. I do get a donation now and then, and I appreciate it greatly. It goes straight into the costs of running this site. As I’ve said many times, regardless of the financial aspect, the site will remain running – until I either kick the bucket or run out of stuff to say.

I have done some SEO stuff. The site has a DR of 30 on Ahrefs. Semrush seems to also like the site. It’s interesting to pay attention to that stuff, but I have no idea what I’m doing. SEO is beyond my ability. Heck, I’m not even a qualified admin!

I did apply to be an affiliate of a service I love, but they not only turned us down, they also refused to send an email explaining why. I was pretty disappointed in the company, but I still use their product. Frankly, it’s the best in the industry. They approve sites with less traffic and they approve sites with far more controversial topics. Ah well…

So, yeah, not a heck of a lot has changed. Copy and paste the results from last month, add more traffic, add more articles, add more words, and you’ve got the same thing going on this month.

And that’s okay. It’s not explosive growth, but it’s consistent growth. As a businessperson, I’ve long since learned to appreciate consistent growth over bursts that can be inconsistent and harmful. The site’s in a good place right now and let’s hope it continues to grow.

Closure:

Again, I consider it quite an honor to get this much traffic. The list of things I do not know could fill a book, but I share what I do know. I don’t think we’ll ever suffer from a lack of article ideas. If we do, we can just repackage the old articles and pretend they’re new – just like all the other sites do!

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.

Locate Your Home Directory

Today’s article will not be long, nor will it be a complicated article, as we learn how to locate your home directory. This is something you probably already know, but you may encounter a strange system where things are a little different. It can happen.

More importantly, this article is going to try something new. Rather than a very long article, it’s going to be a short article. Well, shorter than most – assuming I stop puffing it up with text such as this. Why? Well, I want to see the reception and the statistics.

So, basically, every user account you’re likely to use should have a home directory. This is where the user’s files, customizations, and settings reside. Not every user has a home directory, but the accounts you’d normally use (that is, log into and operate) will likely have such a directory. However, you don’t have to have a home directory – though you can expect some weirdness without one.

How To: Create a New User Without a /home Directory

The usual home directory will be in /home/<user> and that’s pretty much the standard we’ve come to know and love. You really shouldn’t need to locate your home directory, but there comes a time when you just might want to.

So, that’s what this article covers. It covers how to…

Locate Your Home Directory:

So, we’ll be doing this in the terminal. That’s a nice place to do things. Press CTRL + ALT + T and your default terminal should open. If it doesn’t open, pick a better distro!

Nah, just open it from your application menu and love the distro you’ve chosen.

We’ll just be covering a couple of quick ways to locate your home directory. That’s all this article is and there’s no reason to turn it into a longer article. We’ll do them both with the echo command. See man echo for more details.

First, you can try this command:
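Given that the tilde is the shortcut in question here, the command is presumably this one-liner:

```shell
# Print the path of your home directory using the tilde shortcut.
echo ~
```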

As you should know by now, the tilde (~) is a shortcut for your home directory and it works just fine in this use case.

There’s another echo command you can memorize, though it’s slightly longer:
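That would be the $HOME environment variable, echoed like so:

```shell
# Print the HOME environment variable, which holds your home directory's path.
echo "$HOME"
```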

That command uses both the echo command and an environment variable, specifically $HOME (obviously). This will happily echo the results, sending them to your standard output. This can be quite useful if you’re into scripting or the like.

See also:

How To: Show All Environment Variables

I told you that this wouldn’t be long or complicated. You can now locate your home directory from the Linux terminal.

Closure:

As I said, I figured I’d try the opposite of what I’ve been trying lately. My most recent articles have been quite long and quite detailed. I like them. I enjoy writing them. This time around, it seemed like a good idea to try something different with a subject that benefits from brevity.

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.

Find And Remove Duplicate Files With fdupes

Today’s article has you cleaning up your storage space as you learn to find and remove duplicate files with fdupes. This isn’t something you need to do often and it’s something that could theoretically break your system. If you’re going to remove duplicate files, it’s a good idea to exercise some caution.

I’ve previously shared another way to remove duplicate files:

Find And Remove Duplicate Files With rdfind

I suppose it’s pretty obvious as to why one might want to remove duplicate files. You do so to keep your storage tidy and you do so to make space when space is limited. There are all sorts of ways to make free space and removing duplicate files is just one of them.

The tool we’ll be using this time around is known as ‘fdupes’ and the man page describes it like this:

fdupes – finds duplicate files in a given set of directories

It’s an easy enough application to install, and this article shouldn’t be all that long. There’s a very good chance that fdupes is available in your default repositories, which means it’s easy enough to install.

Installing fdupes:

You can install fdupes with a GUI and your software manager, but you can just as easily install it via the terminal. We’ll cover the latter, as it’s the most universal (and, I find, quickest) method. Of course, you’ll need an open terminal. In most cases you can just press CTRL + ALT + T and your default terminal should open.

With your terminal now open, let’s go ahead and install fdupes:

Debian/Ubuntu/derivatives:

SUSE/OpenSUSE/derivatives:

RHEL/Fedora/Rocky/derivatives:

Arch/Manjaro/derivatives:

Gentoo/Calculate/derivatives:

And More! (Just search your default repositories for ‘fdupes’ and it’ll almost certainly be there.)
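For reference, the install one-liners for the families above are likely the usual suspects (this assumes each repository names the package fdupes, which may vary):

```shell
# Debian/Ubuntu/derivatives:
sudo apt install fdupes

# SUSE/openSUSE/derivatives:
sudo zypper install fdupes

# RHEL/Fedora/Rocky/derivatives:
sudo dnf install fdupes

# Arch/Manjaro/derivatives:
sudo pacman -S fdupes

# Gentoo/Calculate/derivatives:
sudo emerge -av app-misc/fdupes
```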

As you can see, you’ll find that fdupes is available for pretty much every Linux system out there. Not only is it available, it’s already packaged for you and easy enough to install via the terminal. On top of that, it doesn’t take much space to install fdupes, a mere 110 kB or so.

NOTE: I do not have anything against GUI tools. While I have terminals open at all times, I do the majority of my computer interaction in a browser – specifically a GUI browser. I suggest and write about the terminal because it’s more universal. It’s also often faster, assuming you can type (or at least cut and paste), than mucking about with a GUI software installer.

Anyhow, it’s nice and easy to remove duplicate files with fdupes. This article is going to show you how – and the article shouldn’t even be that long! It’s pretty simple.

Remove Duplicate Files With fdupes:

I’m going to assume that you left your terminal open after installing fdupes. If you didn’t, you’ll need to open it again. The only way to run the fdupes application is in the terminal. So, even if you installed it with a GUI, it’s a CLI tool and you’ll need the command line to use it.

The basic syntax is pretty easy, and not entirely unlike rdfind. For example, if you want to find duplicate files, you simply run this command:
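The basic form is presumably just the command followed by a directory:

```shell
fdupes <directory>
```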

So, if you wanted to find duplicates in your home directory, you’d run this:
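That would look something like this, with the tilde standing in for your home directory:

```shell
# List duplicate files in the top level of your home directory.
fdupes ~
```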

Don’t worry, you’re safe running that command. That won’t delete anything at all. That fdupes command will simply show you the duplicate files that it found.

If you want to run the fdupes command recursively, that is, to check all the folders within the directory, you’d run the command like this:
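That’s the -r (recurse) flag, presumably applied like so:

```shell
# -r descends into every subdirectory of the given directory.
fdupes -r ~
```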

If you want to calculate the size of the files that would be removed when removing the duplicates, the command is just this:
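That’s likely the -S flag, which reports the size of the duplicate files it finds:

```shell
# -S shows the size of the duplicate files alongside the listing.
fdupes -rS ~
```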

A summary is also available with this command:
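That would be the -m (summarize) flag:

```shell
# -m prints a summary: how many duplicates there are and how much space they occupy.
fdupes -rm ~
```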

You can also search multiple directories for duplicates. That’d be something like this:
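Something like the following, though the two paths here are only hypothetical examples; substitute your own:

```shell
# Search two directories at once for duplicates.
fdupes -r ~/Documents ~/Downloads
```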

Of course, you can run those commands together to get quite a bit of customization. They’re all reasonably harmless and will simply point out the duplicates as well as some meta information. You can then remove the duplicates by hand if you want.

You can also tell fdupes to remove the duplicates that it found. You’d never want to run this command without knowing what exactly is going to be removed, so don’t do that. Always check to make sure you’re not removing anything of value before automatically removing duplicates.

Fortunately, there’s a bit of a safeguard. You can run the following command and fdupes will ask for confirmation before removing the files:
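That’s the -d (delete) flag, which prompts you, set by set, for which copies to keep:

```shell
# -d asks which file in each duplicate set to preserve before deleting the rest.
fdupes -rd ~
```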

If you want to go whole-hog and remove every duplicate found, and remove the files without any confirmation, you can run this command:
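That would be -d combined with -N (noprompt); treat this one with respect:

```shell
# -N, used with -d, keeps the first file in each duplicate set and deletes
# the others without asking. There is no undo.
fdupes -rdN ~
```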

Of course, that’s just the basics. If you want to know more about fdupes, simply check the man page (man fdupes) for more information.

Closure:

Hmm… I think I need to clean or replace this keyboard. The colon key is sticking on me. It’s a bit of a pain in the butt.

Anyhow, if you’ve ever wanted to remove duplicate files with fdupes, you now have directions to do so. This being Linux, you have all sorts of options when it comes to removing duplicate files, though I again urge caution when doing so. If you were to run this on the root directory, you’d likely find a lot of duplicates, and removing them might break your system. So, be careful with these tools, as they’re pretty powerful.

I’m not yet out of ideas for articles but it’d be great if folks might suggest something they’d like to read about. You never know, it might be something I know about. As it is, I have to search this site before writing an article, or else I’d end up with even more duplicate articles. I don’t want that and you don’t want that. It’s not all that easy to keep up this pace, writing a new article every other day. I’ve managed so far, but I’m eventually going to miss a day or two. It’s going to happen.

As always…

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.

Meta: Getting Indexed In Bing

Today’s article will be largely a meta article, not about Linux but about getting indexed in Bing. For whatever reason, people sometimes have issues getting indexed in Bing and I was among those people.

To cut to the chase, I’ll simply say that this is no longer a problem for this site. We’re now indexed by Bing and the site shows up in their search results. In this article, I’ll tell you how. It may take me a few words to get to that point, but I’ll explain how we managed to get indexed in Bing.

So, first, let’s just say:

Welcome Bing Users!

As you may know, there’s a search engine called Bing. The search engine is owned by Microsoft and has been around since 2009. This isn’t Microsoft’s first search engine, but this incarnation has been around for a while now.

While we’re Linux users here, the Bing search engine has been known to give out some pretty solid results. Even so, Bing is far less popular than Google, holding about 3% of the web search market share.

Considering how their default browser (Edge, available for Linux) defaults to Bing as the search engine, it’s rather amazing how little people care about their search. I’ve used Bing extensively and found it gave me adequate results, especially when I used personalized search and searched from a logged-in account. Yes, they happily gave me Linux-related search results.

Bing also provides the search engine results for Yahoo. Yes, Yahoo still exists. Yes, people still use Yahoo. I am not one of those people, but I appreciate what Yahoo did for the web back in the day. So, while they’re surely a dinosaur, they’re still around and still useful for a few people.

The problem was that I couldn’t get this site indexed by Bing. By that, I mean Bing would happily crawl the site but wouldn’t list it in their index. Even searching for the exact domain name showed zero results for this site. It’s a trivial amount of traffic, but I want everyone to be able to find this site and the information here.

Getting Indexed In Bing:

This site published its first article on March 6th, 2021. A new article has been published every other day – many of them are even acceptable! No matter what I did, Bing would not index Linux-Tips.

I did everything correctly. There’s no blackhat SEO, no overly weighted meta tags, no keyword stuffing, no hidden text, and no paying for links in link farms. All the links to this site (and there are thousands of them) are organic.

I even used IndexNow. I’ve used IndexNow from just about day one. That is a method of notifying Bing that new content has been published. They got this notification almost every other day. It does appear to lag sometimes, but that wouldn’t make any major difference. Bing knew the site existed and they refused to index the site. Not only that, Bing knew there was constant new content created and still refused to index the site.

I mostly ignored this. I signed up for Bing’s Webmaster Tools and authenticated the site on day one. They were even granted access to Google’s Search Console, meaning they had even more information about the site. Still, they refused to index the site.

This went on for years. Linux-Tips is more than two years old.

The Rest Of The Story:

Back in 2021, I found a form that I could fill out. I was frustrated and in a rush, so I simply gave them a link to the home page and said something like, “I am not indexed.” I hit the submit button and nothing happened. The site was already full of a lot of content, so I assumed they could figure that out on their own.

They said they’d respond, but they never did. In fact, they still haven’t responded via email to anything I’ve sent them. I guess I don’t blame them. I left a vague message.

Earlier this month, I decided I was going to see what I could do to get to the bottom of this. I searched and searched the internet, trying to figure out why the site wasn’t indexed in Bing. One of the things I learned was that sometimes Bing will mistakenly block a site and that block can be removed. I’m not sure if that’s what happened in this case, but it seems the most likely.

There’s Another Form!

See, I found another form by following various Bing help files. This is the link you need to know about:

Bing Webmaster’s Support

For whatever reason, that link isn’t loading properly for me at this point. None of the Bing tools are working for me at this moment in time. I’m unsure if that’s me, if that’s them, or if they moved the form since I looked at it yesterday.

NOTE/UPDATE: The link does work if I use a VPN. Something weird is going on with their site. My presumption is that they’ll fix it. This is above my pay grade.

You’ll have to log in, of course. If you already have a Webmaster Tools account it will populate some of the fields for you. If you don’t already have said account, you’ll probably need to make one. I already had the account. I’m reasonably sure that you can’t do much of anything without an account.

If the above link doesn’t work for you, search around for the support form and that’s the form you need. You should be able to use a drop-down menu to select indexing as the problem and you should be able to specify your site.

This is what I did…

I made my comment as brief as possible while as technical as required. I explained all those things I explained earlier in this article. The information I gave them was brief, factual, and detailed. I had to try something!

The Results:

After I submitted my information on the form, making sure to select the correct options from the drop-down menu, I got an email confirming that they’d received my request. It was a canned email, an autoresponder message, and it went straight into the junk folder without me seeing it.

Amusingly, I use an outlook.com email address for this and it still went straight to spam. I never got another message. This is what that message said:

Thank you for contacting Bing Webmaster Support Team.

This email is confirmation that we have received your request for https://linux-tips.us/ and created the following Request REQ00063891 . Your ticket is being assigned to a Global Support Webmaster Engineer who will be contacting you about the next steps to resolve your issue. We will get back to you in 10 days.

Take care and stay safe!

Sincerely,
Microsoft Bing

I haven’t heard a word from them since – and that’s okay by me.

Sure enough, on the 17th of this month, I started getting traffic (in my server logs) from Bing users. When I logged into the Bing Webmaster tools, all the data was populated, showing hundreds of indexed pages. Everything was as it should be and even IndexNow appeared to be working properly.

For once, read the conclusion – as it contains more details.

Conclusion:

If you’re having trouble getting indexed in Bing, and you meet their guidelines – including having many articles full of unique and formatted content (along with the rest of the guidelines, like meta tags and the like) – then simply root around on their site until you find the hidden contact form and send them an email. 

In that email, be direct and informative – but don’t waste their time. Let them know the good things you’ve done and the good things you’re doing. Show them that you’ve followed the rules and that your site deserves to be indexed on its merits. After all, the worst thing they can do is just ignore you and not index your site. 

But, of course, make sure those things are true. Read their guidelines and become familiar with what they expect. If you’re using WordPress, it’s trivial to follow the rules. Grab one of the many SEO plugins and set it up correctly. They all have adequate help files. I am not willing to spend the money hiring an SEO expert. The site is expensive enough.

Truly, I’m not an SEO expert. Heck, I’m barely qualified to be a WordPress admin, and some folks would say I’m not even qualified to do that. At the end of the day, if you’re not being indexed by Bing, just send them an email. It worked for me and I figured I’d share the results with other people who may have indexing issues with their own sites.

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.

Set A Timeout Value In cURL

Today we’re going to discuss a topic you probably won’t ever need but that is worth knowing: we’re going to set a timeout value in cURL. That is, we’re telling the cURL application to quit trying if it takes too long. It’s simple enough, so we might as well learn it.

How often do you need this? Well, that depends on you and your workflow. Me? Well, let’s just say that it’s in my notes. I’m not sure that I’ve ever actually used it productively, but it is in my notes. Now? Well, now it’s in your notes! Or, at least it’s here and searchable should you ever actually need to set a timeout value in cURL.

So, what is cURL? I’ve written about it before (some links to follow) but it’s a tool to transfer a URL. That’s exactly what the man page says. Specifically, it says:

curl – transfer a URL

If you want to see what the HTML looks like for this site, you can run this:
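Something like this, using this site’s URL:

```shell
# Fetch the page and write its HTML to standard output.
curl https://linux-tips.us/
```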

(That’s not particularly helpful, but you can do it.)

I mentioned that I’d written about cURL before and it may be of some benefit to read these articles (or at least skim them) if you’re unfamiliar with the cURL application.

Let’s Have a Limited Look at Linux’s cURL Application
How To: Make ‘curl’ Ignore Certificate Errors
How To: Add A New Line With CURL

You can see a couple of useful applications of cURL:

Weather In The Terminal? We can do that!
How To: Find Your IP Address Through Your Terminal

See? So, cURL has some use – even for a regular desktop user. If any of those things take too long, you can set a timeout value for cURL, which is what this article is all about.

Set A Timeout Value In cURL:

cURL is a terminal-based tool. Sure, some GUI applications use it in the background, but it’s a terminal tool. As such, you are going to need a terminal available. You should be able to press CTRL + ALT + T to access a terminal. If not, open one from your application menu.

With your terminal open, the syntax for setting one of the timeout values in cURL is pretty basic and easy to understand. Try this:
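That’s presumably the --connect-timeout option:

```shell
curl --connect-timeout <time_limit> <URL>
```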

The time_limit value is in seconds. If you wanted to load the content of this site’s home page and set a timeout value of 10 seconds, you’d run this command:
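Assuming this site’s URL again, that works out to:

```shell
# Give up if no connection is established within 10 seconds.
curl --connect-timeout 10 https://linux-tips.us/
```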

(Again, not very useful.)

But, that timeout value only covers the connection phase. So, the server will need to respond within 10 seconds or else the cURL process will shut down.

There’s another timeout value for cURL. You can set the overall time limit, meaning the entire process (including the transfer of data) must be completed within that timeframe. If it isn’t, the cURL process will shut itself down. The syntax for that type of timeout value is like so:
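That would be the --max-time option (also available as -m):

```shell
curl --max-time <time_limit> <URL>
```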

So, if you wanted to make sure the entire transfer of data was done in under 60 seconds, your command would look like this:
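Again with this site’s URL as the stand-in:

```shell
# Abort if the entire transfer takes longer than 60 seconds.
curl --max-time 60 https://linux-tips.us/
```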

(Again, not very useful – but it should certainly take less than 60 seconds!)

I suppose you might find some of this useful if you’re cURLing files more weighty than a web page. You can cURL actual files and write that data to your terminal’s standard output. That’s what cURL does, after all. So, you might find a use for this command.

Closure:

Well, this wasn’t a very long article. It doesn’t cover a great deal and probably won’t be useful to 99 out of 100 people. That’s okay. Not all of my articles are meant for the 99% and sometimes you just gotta write what you feel like writing. This is what I felt like writing. It probably won’t do well for search engine results and that’s okay. Someday, somebody will want this information, type it into Google, and find this site. Or another one just like it, I suppose…

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.

Linux Tips
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.