
Ranking the White Way [Updated October 2017]

In this case study, follow along as we rank a niche music website using only Google-approved white hat SEO techniques.

Every month, I will update this post to show our progress.

Unfortunately, we cannot reveal the actual site being used for this case study. In the past, sites featured in public studies have been rewarded with sabotage, and we do not want any negative SEO attacks or other issues to affect the results.

Month 1

I personally built this website a few weeks ago using content written by the director of a music and drama school. He really knows his stuff when it comes to music! Thanks, Jim!

The screenshot below shows that the website has achieved a very decent ranking in a short period of time. These results were obtained without building any links to the site, which you don't see very often.

Rankings Whitehat Case Study May 2017

As you can see, I censored out some crucial information, but you can still see the types of keywords we are tracking.

Because of the keyword modifiers, we can rank close to page one without needing to push the site with links. With the exception of the high-search-volume keywords, these rankings already give us a lot of insight into the website's performance and the product itself.

Reaching out

For our outreach campaign, we contacted Jim again and asked him to write two new articles for us. These articles are likely to be shared and will take about a week to complete. Once they are ready to be published, my partner will start to reach out.

In return for a link to our site, he will tweet the new content or share it on Facebook. He has had a lot of success with these methods before.

To make this work, we need followers on both platforms. I started a paid Facebook campaign for likes, while he built up our following on Twitter. So far, our Facebook and Twitter posts have earned us a total of two backlinks. Hopefully, more will follow soon.

At the end of the next month I will update this post with a new ranking screenshot. This will help you see the progress we’ve made. I’ll also include details on anything else we have done to make this project a success.

Month 2

My partner decided to leave the project, so I am setting out on my own. He has the opportunity to work full-time on his own websites rather than on a JV deal, and I understand.

The first thing I did this month was try to scrape a list of 1,000 unique websites relevant to my niche. I quickly ran into a few problems.

Scraping Google

I’m currently on holiday, so I have access to three different IP addresses: the one from the villa I am staying at, my neighbor’s, and my mobile connection. I am able to connect my laptop through mobile hotspot tethering.

I started using Scrapebox, loading in a few keywords for testing purposes. Within minutes, all 3 of my IP addresses were blocked.

After that, I tried using another tool that I bought a few years ago called RDDZ. Unfortunately, I ran into the same issue. With my IPs consistently getting blocked, I was almost out of options.

One of the recommendations I got was to buy proxy IP addresses. My concern is that if it takes 3 minutes to block 3 real IP addresses, how beneficial will a proxy really be? It’s hard to justify spending $2 on a proxy IP if it will only last 10 minutes.

Scraping Bing instead

Bing is a lot more tolerant of people scraping its search results; however, when I entered four keywords, I only received 31 results. This was very strange, as I had set the result count to 50.

So, I decided to explore my issues with Scrapebox a little further. Its last update took place almost two years ago, but I wanted to see if I could find a solution. As it turns out, the footprint used to detect the Next page button had changed over time.

If you own Scrapebox and are having the same problem, go to Settings -> Custom Harvester Settings. You will see a list of available search engines. Select Bing and look for the Marker for Next Page field.

You will see the following code:

"><div class="sw_next">Next</div></a>

I pasted a screenshot below for further clarification.

Scrapebox Custom Harvester
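
If you want to check whether this footprint still matches Bing's markup before re-running a harvest, a few lines of Python can fetch a results page and look for the marker in the raw HTML. This is just a quick diagnostic sketch of my own, not part of Scrapebox; the keyword is a placeholder and the sw_next class is simply taken from the footprint above:

import requests

# A desktop user agent; Bing may still throttle automated requests,
# so treat this as a one-off check rather than a scraper.
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

def footprint_present(query, marker='class="sw_next"'):
    """Fetch one Bing results page and check whether the next-page
    marker from the Scrapebox footprint appears in the HTML."""
    response = requests.get("https://www.bing.com/search",
                            params={"q": query},
                            headers=HEADERS, timeout=15)
    response.raise_for_status()
    return marker in response.text

if __name__ == "__main__":
    print(footprint_present("beginner piano lessons"))  # placeholder keyword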

Scraping a list of URLs

Because we are doing this with the intention of reaching out to people, we want to avoid as many weak sites as possible. I personally chose to keep the first 25 results per keyword.

With only one keyword, I would only get 25 results. To get more URLs, I needed to scrape some keywords before continuing. I did this in a very basic way: I entered just one seed keyword and set the keyword scraper to go two levels deep, which provided me with 100 keywords to use.
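
Outside of Scrapebox, the same level-based expansion can be reproduced with a short script: pull suggestions for the seed keyword, then pull suggestions for each of those suggestions. The sketch below uses Bing's public autosuggest endpoint as the suggestion source, which is my own assumption rather than what Scrapebox actually does under the hood:

import requests

# Public autosuggest endpoint (my assumption); it returns OpenSearch JSON
# of the form [query, [suggestion, suggestion, ...]]
SUGGEST_URL = "https://api.bing.com/osjson.aspx"

def suggestions(keyword):
    """Return autosuggest keywords for a single seed keyword."""
    response = requests.get(SUGGEST_URL, params={"query": keyword}, timeout=15)
    response.raise_for_status()
    return response.json()[1]

def expand(seed, levels=2):
    """Expand a seed keyword the way a two-level keyword scrape works:
    level 1 pulls suggestions for the seed, level 2 pulls suggestions
    for each of those suggestions."""
    current, collected = [seed], []
    for _ in range(levels):
        next_level = []
        for keyword in current:
            next_level.extend(suggestions(keyword))
        collected.extend(next_level)
        current = next_level
    return list(dict.fromkeys(collected))  # de-duplicate, keep order

if __name__ == "__main__":
    keywords = expand("piano lessons", levels=2)  # placeholder seed keyword
    print(len(keywords), keywords[:10])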

Receiving 25 results per keyword allowed me to scrape about 2,500 URLs. After removing duplicate domains, I was left with about 900 unique websites to reach out to.

Check the screenshot below to see how I scraped those keywords:

Scrapebox Keyword Scraper

As you can see, I entered one keyword with the level set to 1 for the screenshot. Adjusting it to level 2 makes the scraper go two levels deep, pulling roughly 10 suggestions for each keyword it finds, which is how I ended up with a total of about 100 keywords.

You can also see a Send to Scrapebox button in the screenshot above. This button sends all of the keywords to the keyword list. After that, it’s only a matter of clicking Close to get the screen below.

Don’t forget to select Use Custom Harvester in the settings screen.

Scrapebox Lets Harvest

Now I can click Start Harvesting, which is hidden behind the drop-down menu to the left of Stop Harvesting. Because I chose to use the custom harvester option, it will ask me which platform I want to scrape from.

I chose Bing because I already adjusted the footprint to make it work. Then, I clicked Start.

I used this tool again for this post and was able to scrape a few thousand results, as you can see below:

That's a total of 2,840 URLs based on 95 keywords.

I'd like to point out the Results Per Engine / kw option. I set it to 30 because the custom harvester ignores the results setting on the main dashboard.

The scraping process is complete, so I clicked Close. There's no need to export anything, as the URLs now show up in the harvester window. I also removed the duplicate domains to prepare for the next step.

Scrapebox Remove Duplicate Domains

As you can see, 1,799 URLs have been removed from the list of 2,840 results. This leaves me with 1,041 unique domains.

Now, I click on Trim to Root, which strips each URL down to its root domain, for example https://www.contenthourlies.com. Working at the domain level makes it easier to judge the strength of these sites.
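
If you ever need to do this step outside of Scrapebox, trimming to root and removing duplicate domains only takes the Python standard library. A small sketch, assuming the harvested URLs were exported to a plain text file with one URL per line (the filename is just an example):

from urllib.parse import urlparse

def trim_to_root(url):
    """Reduce a full URL to its root, e.g.
    https://www.contenthourlies.com/some/page -> https://www.contenthourlies.com"""
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}"

def unique_root_domains(path):
    """Read harvested URLs from a text file and return de-duplicated roots."""
    with open(path, encoding="utf-8") as handle:
        urls = [line.strip() for line in handle if line.strip()]
    return list(dict.fromkeys(trim_to_root(url) for url in urls))

if __name__ == "__main__":
    domains = unique_root_domains("harvested_urls.txt")  # example filename
    print(f"{len(domains)} unique domains")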

Determining the strength of these domains

Scrapebox used to have an option to grab the PageRank of each domain it harvested. However, Google stopped publishing PageRank years ago, rendering that feature useless. Instead, I will use RDDZ to pull metrics from Majestic SEO.

The benefit of Majestic SEO is that API access is included with its cheaper plans. The full API requires a subscription that costs hundreds of dollars, so the cheaper option is much better value, especially if you are only using the tool occasionally.

So I exported all of those domains to a text file using Scrapebox and imported them into RDDZ:

RDDZ Data

I had already entered my Majestic API key and credentials into RDDZ. Then, I clicked the Get Backlinks Data icon. You can choose between domain, subdomain, or URL; I went for domain, as this gives the most reliable picture when judging strength.

As you can see from the screenshot, it also pulls the Topical Trust Flow. A lot of the results fall under Arts and Music, so the list I scraped is pretty solid.
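
For anyone who doesn't own RDDZ, the same Trust Flow and Topical Trust Flow figures can be pulled straight from Majestic's API. The sketch below is based on Majestic's GetIndexItemInfo command as I understand it; double-check the command and parameter names against their current API documentation before relying on it:

import requests

API_URL = "https://api.majestic.com/api/json"
API_KEY = "YOUR_MAJESTIC_API_KEY"  # placeholder

def majestic_metrics(domains):
    """Query Majestic's GetIndexItemInfo command for a batch of domains.
    Parameter names follow Majestic's public API docs as I remember them,
    so verify them before building on this."""
    params = {
        "app_api_key": API_KEY,
        "cmd": "GetIndexItemInfo",
        "datasource": "fresh",  # or "historic"
        "items": len(domains),
    }
    for index, domain in enumerate(domains):
        params[f"item{index}"] = domain
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()  # includes Trust Flow, Citation Flow and topical data

if __name__ == "__main__":
    print(majestic_metrics(["contenthourlies.com"]))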

My next step was to remove enormous sites like YouTube, Facebook, and Amazon from the list. I decided to remove any results that had a Trust Flow of 65 or higher. I also removed anything that had a Trust Flow of 4 or less.

Interestingly, there are some very impressive sites with a Trust Flow of 4 that rank for massive keywords. This shows that TF isn't entirely reliable, but since PageRank is no longer available, we have to use something.

After removing the strongest and weakest sites, I was left with about 800 unique domains.
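
The filtering itself is trivial once the Trust Flow values are exported. A quick sketch, assuming a two-column CSV of domain and Trust Flow (the filename and column order are my own example, not an RDDZ export format):

import csv

MIN_TF = 5   # drop anything with a Trust Flow of 4 or less
MAX_TF = 64  # drop anything with a Trust Flow of 65 or higher

def filter_by_trust_flow(path):
    """Keep only the domains whose Trust Flow sits between the thresholds."""
    kept = []
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.reader(handle):
            domain, trust_flow = row[0], row[1]
            if not trust_flow.isdigit():
                continue  # skip a header row or blank values
            if MIN_TF <= int(trust_flow) <= MAX_TF:
                kept.append(domain)
    return kept

if __name__ == "__main__":
    targets = filter_by_trust_flow("domains_with_tf.csv")  # example file
    print(f"{len(targets)} domains left for outreach")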

Collecting emails

I sent this list of 800 domains to my assistant, who will visit each site and write down their email addresses. If there is no email address available, he will write down the contact page URL for future use. This is the first step that people should take when starting their outreach campaign.

There are multiple ways to scrape websites and collect email addresses. You can use expensive tools like BuzzStream or monthly subscription services, but this may be overkill if you only have one website that you are trying to rank.
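
If you'd rather not pay for a tool at all, a rough crawler that checks the homepage and a couple of likely contact pages for anything that looks like an email address gets you most of the way, although the results still need manual review. A basic sketch, with the contact paths and example domain being my own guesses:

import re
import requests

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CONTACT_PATHS = ["", "/contact", "/contact-us"]  # common guesses, not exhaustive

def find_emails(root_url):
    """Fetch the homepage and a few likely contact pages, then pull out
    anything that looks like an email address."""
    found = set()
    for path in CONTACT_PATHS:
        try:
            response = requests.get(root_url + path, timeout=10)
        except requests.RequestException:
            continue  # skip unreachable pages
        found.update(EMAIL_PATTERN.findall(response.text))
    return found

if __name__ == "__main__":
    for site in ["https://www.contenthourlies.com"]:  # swap in your domain list
        print(site, sorted(find_emails(site)))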

Because I used two different tools, my estimated cost was $250. Soon, I will list a service on Content Hourlies that creates this list for you for only $25. You can then send this list to your assistant to collect emails, but I may offer a gig for this as well.

That’s it for now. Once I receive those emails from my assistant and start reaching out to sites, I will update this post again!

To stay up to date on our case studies, you can also join our group on Facebook. We'd love to see you there.