Let’s kick things off with a warning: do not try these techniques at home! In fact, don’t try them anywhere, unless you want your site banned. We’ve already explained why it’s important for webmasters to be able to identify spammy or black hat SEO, and here’s what to look out for:

Keyword stuffing is so last millennium

Keyword stuffing is one of the oldest tricks in the book and hasn’t fooled Google for a long time. Also known as ‘keyword loading’ or ‘keyword stacking’, the idea is to dupe the search engines into thinking a page is highly relevant by shoehorning keywords in wherever possible.

Keyword stuffing began life fairly unimaginatively, consisting of paragraphs repeating the same keyword over and over again, or never-ending lists of keywords ‘stacked’ on top of one another which disappeared off the bottom of your screen. Keyword stuffing was a crude technique, but it worked for a while.

As the search engines evolved and started to crack down on keyword stuffing, the black hats became slightly more sophisticated. Monosyllabic text was transformed into keyword-heavy copy, and while it didn’t read well it bought them some time. They also discovered that rankings could be squeezed out of the metadata, and every tag became a target.

It was time for Google to lay down the law, and they responded by developing a series of filters to analyse keyword density. Overnight Google turned the world of SEO upside down, and suddenly too many keywords would harm your search rankings. Keyword stuffing had been driven out of town.

Today anyone who’s in the know will agree that keyword stuffing simply doesn’t do it for Google. However, one of the darkest mysteries of SEO is why ‘keyword stuffing’ continues to hold such a tight grip on the public imagination. Plenty of webmasters still follow the line that repeating keywords a handful of times surely can’t hurt. It can, and your website will feel the pain.

Word of warning: If you are unsure whether something is black or white, follow Google’s mantra that everything on your page should be designed for human consumption rather than as spider food. It’s certainly a good yardstick for gauging whether you have gone overboard with keywords in your copy.
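If you want to put a rough number on it, a short script can tally how often a keyword appears against a page’s total word count. The sketch below is purely illustrative: it assumes Python 3 with the third-party requests and beautifulsoup4 packages installed, and the URL and keyword are placeholders. The percentage it produces is a sanity check rather than anything Google publishes, but copy written for spiders tends to give itself away.

    # Rough-and-ready keyword density check: a sanity test, not a Google metric.
    # Assumes Python 3 with the third-party packages requests and beautifulsoup4.
    import re
    from collections import Counter

    import requests
    from bs4 import BeautifulSoup

    def keyword_density(url: str, keyword: str) -> float:
        """Return a single keyword's share of all words in the page's visible text."""
        html = requests.get(url, timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(" ")
        words = re.findall(r"[a-z0-9']+", text.lower())
        if not words:
            return 0.0
        return Counter(words)[keyword.lower()] / len(words)

    # Placeholder URL and keyword; if the figure creeps towards double digits as a
    # percentage, the copy was probably written for spiders rather than people.
    if __name__ == "__main__":
        print(f"Density: {keyword_density('https://example.com/', 'widgets'):.1%}")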

Hiding text won’t do your site any favours

Another favourite technique for hoodwinking the search engines is to ‘hide’ text and keywords on a page. Camouflaged text won’t be spotted by human searchers and therefore doesn’t have a negative impact on the user experience. But in the search engines’ eyes it’s still a serious crime.

For the black hats it’s one big game of ‘hide and seek’: they develop a way of concealing copy, wait for the search engines to come and find them… then it’s time to find another hiding place. However, it’s a game that Google doesn’t like playing, and if you get found you can expect to be removed from their index.

Again the spiders see one page and the searchers see another, but now the differences between the two can be much greater. At the extreme, hidden text can be used to make pages rank for seemingly unrelated keywords.

In the early days the easiest way of hiding text was to make it the same colour as the background. Camouflaged text may have been invisible to the human eye, but the search engines soon caught on and adjusted their algorithms accordingly.

The black hats weren’t about to give up quite so easily and discovered that Google had much more difficulty identifying text that was hidden in near identical colours, but used a different ‘hex code’. Google was forced to up their game and they hit back by indexing a range of colours, rather than a single shade.

Another trick for making text and links ‘disappear’ was to use fonts so small that searchers couldn’t see them, but spiders could. Google cleared this one up fairly quickly by simply factoring font size into their algorithmic page analysis. Today it’s a wholly redundant technique, yet a sizeable number of amateur optimisers are still willing to give it a go.

Word of warning: As myopic as it may seem, most black hats choose the same hiding place, and that’s at the bottom of the page. If you suspect that something untoward is going on, highlight the area with your mouse and any hidden links or text will appear in the spotlight.
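The mouse trick works well for a quick eyeball, but if you’d rather automate the sweep, a short script can flag the crudest offenders: text coloured to match a white background or shrunk to an unreadable size. The following is a minimal sketch, assuming a white page background, Python with requests and beautifulsoup4 installed, and a placeholder URL. It only reads inline styles, so anything hidden via external stylesheets or CSS layers will need a rendered-page check.

    # Crude hidden-text sweep: flags elements whose inline style sets the text
    # colour to white (assumes a white page background) or an unreadably small font.
    # Inline styles only; external stylesheets need a rendered-page audit.
    import re

    import requests
    from bs4 import BeautifulSoup

    SUSPECT_COLOURS = {"white", "#fff", "#ffffff"}  # add near-white hex codes as needed
    TINY_FONT = re.compile(r"font-size\s*:\s*[0-2](px|pt)\b")
    TEXT_COLOUR = re.compile(r"(?<![-\w])color\s*:\s*([^;]+)")

    def suspect_elements(url: str):
        html = requests.get(url, timeout=10).text
        for tag in BeautifulSoup(html, "html.parser").find_all(style=True):
            style = tag["style"].lower()
            colour = TEXT_COLOUR.search(style)
            if colour and colour.group(1).strip() in SUSPECT_COLOURS:
                yield tag, "text colour matches a white background"
            elif TINY_FONT.search(style):
                yield tag, "font size of 2px/2pt or smaller"

    if __name__ == "__main__":
        for tag, reason in suspect_elements("https://example.com/"):
            print(reason, "->", tag.get_text(" ", strip=True)[:60])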

Tags also provide potential hiding places for text, and while Google knows where to look most of the time, they don’t know all of the time. Perhaps the most commonly abused tag today is the <noscript> tag, which is meant to hold fallback content for visitors (and spiders) that can’t run JavaScript. But be warned: Google’s watching and working hard to put a stop to it.

By now you should be getting the picture that black hats will try to hide text anywhere and everywhere. Today’s favourite hiding place is between CSS layers: feed one layer of information to the spiders and stack another on top for the humans. Google’s been having difficulty figuring out the best way of dealing with CSS manipulation, but you can bet they’ll get there in the end.

Word of warning: Black hats aren’t just hiding text; they are also hiding links. Internal linking is a great way of redistributing PageRank throughout your site and helping the weaker pages to rank, but a page can only have so many links before it begins to lose credibility. So, hide the links and the problem’s solved, right?

Wrong. Google takes a very dim view of any attempt at PageRank manipulation. Google’s already hot at devaluing or penalising the banks of links you so often find at the bottom of pages, and anything ‘darker’ could get your site in very hot water. That includes miniaturising links, hiding them behind images (or in punctuation marks!) and camouflaging them to blend into the page.

Doorway pages will lead to an SEO dead end

Doorway pages are designed exclusively to lure search engine traffic, but contain little or no content themselves. Instead they act as ‘doorways’ through which the black hats can direct visitors wherever they choose.

In practice this means that if a searcher clicks on a result which looks useful, they can be redirected and served up a page which is useless. Also known as ‘gateways’ and ‘portals’, it’s easy to see why search engines despise doorway pages. And Google’s dislike is particularly potent, as is spelled out in their webmaster guidelines:

“We frown on practices that are designed to manipulate search engines and deceive users by directing them to sites other than the ones they selected and that provide content solely for the benefit of search engines. Sites making use of these practices may be removed from the Google index, and will not appear in Google search results.”

By creating a series of doorway pages, with each optimised for a particular keyword or a particular search engine, black hats can attract traffic from a variety of sources. Once they have the traffic they can do with it as they please; sometimes this means redirecting searchers towards related content, but often there’s a more malicious intent.

Word of warning: Not only are doorway pages a gross violation of Google’s webmaster guidelines, but creating a number of very similar pages can trip their duplicate content filters. Whichever way you look at it, doorway pages aren’t a good idea.

Doorway pages seem like such an obvious SEO disaster zone that you have to question why anyone would consider using them. The sorry truth is that some business heads simply can’t see that they are doing anything wrong. And you don’t have to dig deep into the forums to find some astonishingly well-known companies who have had their fingers burned.

Over time Google’s definition of doorways has broadened to include any pages which aren’t the ‘intended destination’, and this includes pages designed solely to generate PPC and affiliate revenue. After all, how could it be to Google’s commercial advantage to direct visitors to a site whose sole intention is to direct them elsewhere?

Some webmasters tread a fine line between creating ‘information pages’ and ‘doorway pages’, but it’s a high wire act that all too often goes wrong.

Redirecting pages won’t fool Google for long

In the early days of SEO black hats would use the meta refresh tag to redirect searchers from doorway pages to their duplicitous destinations. It wasn’t long before the search engines got wise and reacted by ignoring all pages containing the meta refresh tag. Black hats and spammers were left with a problem, which they solved in one of two ways:

The first involves placing the redirect in JavaScript, where it can’t be readily read by the search engines. The second is to trigger the redirect when the searcher’s mouse hovers over a link, then turn the doorway page into one giant link hotspot so the redirect is all but forced. Needless to say, while both techniques work, Google isn’t particularly impressed by either of them.
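These tricks leave footprints you can look for. The sketch below (Python, with requests and beautifulsoup4 assumed installed and a placeholder URL) scans a suspect page for the old meta refresh tag and for an inline JavaScript location redirect; the mouseover variant needs a closer manual look at the page’s event handlers. Plenty of legitimate pages use refreshes and script redirects for harmless reasons, so treat a hit as a prompt to dig deeper rather than proof of spam.

    # Scan a suspect page for two classic sneaky-redirect footprints:
    # a meta refresh tag and an inline JavaScript location redirect.
    import re

    import requests
    from bs4 import BeautifulSoup

    JS_REDIRECT = re.compile(r"(window\.location|document\.location|location\.href)\s*=", re.I)

    def redirect_signals(url: str) -> list:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        signals = []
        for meta in soup.find_all("meta"):
            if meta.get("http-equiv", "").lower() == "refresh":
                signals.append(f"meta refresh -> {meta.get('content', '')}")
        for script in soup.find_all("script"):
            if script.string and JS_REDIRECT.search(script.string):
                signals.append("JavaScript location redirect in an inline <script>")
        return signals

    if __name__ == "__main__":
        for signal in redirect_signals("https://example.com/suspect-page"):
            print(signal)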

Cloaking is old school but still used by the cowboys

Cloaking is one of the more complex techniques in the black hat’s bag of SEO tricks. Again it works on the tried and tested principle of showing one page to the spiders and a different page to the searchers, but there’s one big difference: cloaking isn’t always used with malicious intentions.

While black hats exploit cloaking to skew the search results other webmasters use cloaking legitimately to customise their user experience. Of course this puts Google in a very difficult position, and they still haven’t decided how to lay down the law. There are two basic methods of cloaking:

  1. Agent delivery

Whenever a browser or a search engine spider requests a page from a server, it must first identify itself by declaring its ‘agent name’ (its user-agent string). The visitor tells the server its name, so the server knows exactly where the page is going: if the page is requested by a browser it will be read by humans, and if it’s requested by a spider it will be read by robots.

Black hats create two versions of a page: one which is heavily optimised with oodles of tasty spider food, and a second which is strictly for human consumption. They then instruct the server to dish up the different pages depending on the ‘agent’ making the request. Despite the fact that both visitors have requested the same URL, they end up with completely different pages.
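If you suspect a site of agent-delivery cloaking, you can test it from the outside by requesting the same URL with a browser user-agent string and with Googlebot’s, then comparing what comes back. Below is a rough sketch, assuming Python with requests installed and a placeholder URL. It will only catch servers keyed on the user-agent string, so the IP-based variant described further down slips straight past it, and small differences (ads, timestamps, personalisation) are perfectly innocent.

    # Compare the 'browser' and 'spider' versions of a page to sniff out
    # agent-delivery cloaking. Identical pages score close to 1.0; a markedly
    # lower score is worth a closer look.
    import difflib

    import requests

    BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
    # Googlebot's widely published desktop user-agent string (subject to change).
    GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    def cloaking_similarity(url: str) -> float:
        as_browser = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10).text
        as_spider = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10).text
        return difflib.SequenceMatcher(None, as_browser, as_spider).ratio()

    if __name__ == "__main__":
        print(f"Similarity: {cloaking_similarity('https://example.com/'):.2f}")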

The potential for black hat abuse is clear, but plenty of upright webmasters have accidentally strayed into this lawless territory. Cloaking can be used to compensate for some of Google’s technical shortcomings, such as getting ‘image heavy’ and ‘Flash based’ websites to rank.

However, Google isn’t happy with anyone interfering with their work and threatens: ‘Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.’

Besides talking tough, the search engines take action by continually tweaking agent names so that servers won’t recognise them. It’s enough to keep the black hats on their toes, but it won’t stamp out ‘agent delivery’ cloaking entirely.

  2. IP delivery

A variation on the above theme, but this time black hats use the visitor’s IP address to identify who is requesting the page. It’s easy enough to buy a list of all the IP addresses used by the search engine spiders, and then all that’s left is to instruct your server to deliver one page to spiders and another to searchers.

IP delivery cloaking causes a bigger headache for Google because it’s also used by websites as a way of delivering geo-targeted or personalised content. If you are an upstanding webmaster who’s legitimately involved in cloaking, it’s still worth asking yourself whether your pleas of innocence would stand up to judicial scrutiny by Google’s spam team. If the answer is ‘no’, then you can expect a hefty sentence.

Swapping pages really isn’t a good idea

Perhaps the most straightforward way to get a page ranking is to ‘swap it’ for one that’s already ranking. It’s a devastatingly simple trick and it’s surprising that it ever worked. Black hats would optimise a page for a high traffic keyword, get it ranking, and then switch the content to their preferred money-making vehicle. It’s that simple.

Most black hats have got thick skin and won’t lose any sleep over the frustration they cause searchers and search engines. However, there is one factor that has more or less consigned page swapping to the history books, and that’s Google’s accelerated crawl frequency.

Page swapping only ‘works’ until a spider visits and realises that something’s up, whereupon the page will be algorithmically dropped from the results. In the not too distant past there simply weren’t enough spiders to go around, and top ranking sites could go for days, or even weeks, without being crawled.

However, today it’s a different story and you can expect a top performing page to solicit at least one visit from Googlebot every 24 hours. While the other search engines have some catching up to do, Google has nevertheless managed to put the brakes on page swapping.

Page swapping may be on the wane, but there are plenty of optimisers who advocate building new sites on the back of existing websites in order to harness Google’s hard-earned trust. For now it works, but it’s blatantly ‘black hat’ and transparent to the human eye… which means competing webmasters will file spam reports.

How interlinking could get your website banned

It’s no secret that Google’s a bit precious about PageRank, and to protect it they’ve spent countless hours dreaming up ways that black hats might manipulate the search results. So if you think it’s as easy as building a bunch of websites and interlinking the lot, then you had better think again.

Google knows all the tricks in the book and the days of interlinking a network of sites are on their way out. However, for a considerable time interlinking was a licence for the black hats to print money. It was a straightforward strategy: just take a website, then rehash, repurpose, repackage and republish it. Link all the sites together and wait for the money to come rolling in.

And it worked, until Google got wise and learned how to recognise linking networks. Today pooling links from a network of sites is unlikely to provide any benefit and is much more likely to get you in deep water. It’s also worth noting that not all penalised networks have black hat motives at heart; even heavy linking within a network of high quality websites can cause serious problems.

Link farms work on the same principle, except now you are linking to other people’s sites rather than staying within your own network. Not only are you running the risk of being flagged for interlinking, you also run the risk of tying your site to a bad neighbourhood. Steer well clear.

To sum it all up, Google’s getting pretty good at detecting black hat techniques, and as a result the arts are getting darker and darker. Many of today’s black hat practices aren’t just unethical; they are illegal. Don’t go there.
