Saturday 31 May 2008

Think of what you are saying...

A customer this week sent me a rather hotly worded email telling me that if I wasn't up to the task why couldn't I tell him and why had it taken me so long to get the website back to him. My reply, in rather a few more words, was to remind him that 4 weeks ago I'd sent him the links to the live site and asked him to review it and that after my emailing him several times, he said he was too busy with his garden to review the site.

Why then am I getting the blame for the delays - when I've been waiting 4 weeks for a reply? He also said he didn't like the look of the site, which is based exactly on the slides his graphic designer sent to us. Looks like this could be a fun site to finish off over the coming weeks... A possible redesign and goodness knows what else - at least this job is not on a fixed price.

Whereas another job that has taken most of my time this week is a fixed price contract and my sales guy is letting it run away. This customer is desperate to go live in the middle of next week, yet I'm still waiting for the latest changes to the items page, which need to be completed before he'll sign the pages off and I can actually build the site. Even if I get the latest changes list on Monday and can get the changes done the same day, it's not leaving me much time to build the database interface, especially when I have other customers wanting work from me.

But that's life - and the sooner we rein in these daft customers the better - the more new sites we can go live with.

Friday 30 May 2008

The redesign is live...

Well the Cottage Holidays website redesign has gone live overnight and bar a few teething problems, the site is up and running. I managed to sort the Google Maps problem before publishing the site so the cottages do show a location map - now I just have to see if I can include the maps in the tourist information pages...

It wasn't quite as straightforward as it looked to be. During final testing I noticed that only 500 of the holiday cottages had been loaded, out of the 1,700+. There was also a selection of French properties that didn't have anywhere to go and crashed the load.

Once the full selection of 1,700 went on, the load was a lot more difficult, but I got there and the latest database is now loaded. I'll be loading the database again over the next few days to pick out the new cottages as each region's home page and the site's home page do list new additions to the site. At the moment they are just showing random cottages loaded today - not what I wanted to show!

The only glitch that I've noticed so far is that in a couple of places the region is not decoded in the display of the cottage name. Not difficult to change, just a lot of places to change it in.

Hopefully I'll manage to get the search engines interested in the site and over the next few weeks and months will be reporting increased traffic. Can't be much worse than the current month has been! Once we hit the end of the month I'll post the results as another benchmark of where the site is at the moment and in a few months I'll be able to look back at both sites and sets of data to see if the redesign has brought in more traffic.

Thursday 29 May 2008

Cottages rebuild almost ready...

The first phase of the holiday cottages website rebuild is all but completed. It's still short of several aspects, but most aren't needed for a go live. Yes, it's been a long time coming, but over the last week or so I've not had the time to move it along...

The main missing features at the moment, and the only ones that need sorting before I put the new version live, are Google Adsense adverts and the Google Map feature.

Google Adsense is easy to sort - I just need to swap machines and then plug the adverts in. The maps are proving a little bit harder. In theory it's easy, in actual fact it's a tad difficult. The code is working stand-alone, but as soon as I try to include it into the page it's full of errors. And again, this machine doesn't show them. It looks like the phase 1 release of this site will need to blank out the Google Maps - shame!

Building the site wasn't quite as much of a pain as I expected, especially the creation of all of the pages. I thought that creating around 3,500 pages might need to be split into many scripts, but it's all done in a single step.
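For the curious, that single step is nothing clever. It's roughly this shape in PHP - a simplified sketch, where the database details, table and column names are invented for illustration rather than being my real schema:

    <?php
    // Sketch: write one static page per property, straight from the database.
    $db = new mysqli('localhost', 'user', 'pass', 'cottages');
    $result = $db->query('SELECT id, name, description FROM properties');
    while ($row = $result->fetch_assoc()) {
        file_put_contents('site/cottage-' . $row['id'] . '.html', buildPage($row));
    }

    function buildPage($row) {
        // Wrap the property details in the site template (trimmed right down here).
        return '<html><head><title>' . htmlspecialchars($row['name']) . '</title></head>'
             . '<body><h1>' . htmlspecialchars($row['name']) . '</h1>'
             . '<p>' . htmlspecialchars($row['description']) . '</p></body></html>';
    }
    ?>

A few thousand iterations of a loop like that run in seconds - the slow part is getting the files onto the host afterwards.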

The plan is that over the morning I'll get the Google Adsense in, probably blank out the map section, and then rerun everything to put the new site live and let Google et al get trawling the new pages.

Then I can start adding the tourist information and hotel information onto the pages - anything to really make the site have plenty of unique content. I'll keep posting as it develops.

Wednesday 28 May 2008

Let your webdesigner do their work...

Whilst it's actually very nice when a customer arrives knowing what they want, along with drawn out screen prints to work from, it can be taken too far.

One guy whose site I'm working on at the moment took my first sample, put it into Photoshop and moved things around a little to explain how he wanted it to look. That's great, but when the instructions start to tell me how to optimise the site and what I'm doing wrong, they are going too far! It's especially awkward when the instructions are based on old information - "put in loads of keywords in the keyword metadata", "put some text at the bottom of the page in the same colour as the background", "list all the keywords in the picture alt tags".

Fine, thanks for that help, but the ideas you are suggesting at best don't work, at worst can get your site banned.

Here's the simple truth. If you are getting a webdesigner to build and maintain a site then don't go for over the top on page optimisation. Just go for well built, well designed pages. If the page is over optimised using dirty tricks then when the search engines change their algorithms your site is going to drop, and maybe get banned. If you want to be tricking the search engines using the latest discoveries then you need to be in a position to be able to change your site at a moment's notice - not join a long queue of work to be done as maintenance.

If you are paying someone to do the job for you, then let them use their knowledge and build the site for you - don't tell them how to do their work, no matter how much you have researched. Or will you be telling the pilot how to land because of your research next time you go on holiday?

Tuesday 27 May 2008

I'm not really that negative...

I don't want to appear to be too negative when I'm talking about checking reciprocal links - just realistic. There are a good number of sites out there that are genuine, but then there are also a good number that aren't. It's less than 10 days since I ran the links check on CompareMortgageRates and when I started the process today there were exactly 400 sites in the directory. Of these, 17 no longer link back to me - yet they all did when I looked less than 10 days ago.

Extrapolate over 31 days and that could easily have been around 45 sites missing link backs - over 10 percent! I had a look through some of the sites with missing links and those that I looked at were all 3-way links - initiated by the other party. It's what I was saying in that article - the person being paid to search for links suddenly has no financial incentive to keep my link there so pulls it without warning.

I see no harm in 3-way linking, if done fairly. But in my experience it's never fair. It's usually done so that someone being paid to promote a site can do so more easily. If they took the time to install link software on the site that is paying them to do the work then there would be a long term investment - the links directory would be there for as long as their customer wants it. I suppose that's the difference between my way of working and others. I would give the customer full control; they would rather all the links slowly crumble once the customer stops paying.

Give me a fair link and I'm happy!

Sunday 25 May 2008

Does my aggressive link building work?

OK, so over the last month I've become very aggressive / determined in deleting links that don't benefit me. Does it do any good?

Well, as I was reviewing my list of link check dates, I made an observation. In the past I've recorded the date that I've checked that links are still active. On the whole, I've deleted links that aren't reciprocated any more and noted that date. It was only when I'd neglected to do this for a while that, as an experiment when I remembered, I tried deleting far more links on a couple of customer sites (they were so full of links after not being checked for a few months that it took ages!).

My sheet shows that from January to July 2007 I carefully ran my link check every month. Before that it was pretty ad hoc.

There was also, at some point either before or just at the start of that period, a time when my CompareMortgageRates directory became corrupted and wouldn't add links for a few weeks - but people kept trying. Eventually I rebuilt it - with a lot of links missing. It was shortly after that that the site hit PR5.

What about normally? Well last year, the site's performance improved until October, then tumbled. It picked up again in January and February this year, March wasn't so good, April better and May not so good.

And what about the links check? Well as I said above, until July I was checking monthly, then forgot it until October, then again until January. So when I was checking, and for about 3 months afterwards, the site traffic was good. And when I did a full check in October, about 3 months later traffic picked up. Same again after my January check.

So it looks like careful admin of the links directory is important - let it slip and traffic falls. By checking it every month the traffic is at its best. When I let the checking slip, a couple of months later so does the traffic.

It's not instant - and that is obvious. It takes the search engines a few weeks to pick up all of the links pages and the links pages pointing in and then for Google etc to fully update their directories. Whether being more aggressive will help, I don't know. But with what happened when the directory crashed, it could be the case.

Friday 23 May 2008

Am I right to delete so many links?

Well, for a start, let's make it clear. They are my links directories - so I am within my rights to do what I want! Ethically though, should I have mass deletions? Does it affect my page ranks?

Well I first tried a mass deletion back in early December - over 6 months and 3 page rank updates ago. I tried it on a PR5 and a PR4 website - and both still hold their page ranks. More than could be said for other sites. What's more, the page rank of the directory pages on these sites is quite good.

But why do I feel I should do it? I've noticed that many of these links that come back as grey barred pages are on PR5 / 6 or low PR sites. In the first case, I'm suspecting foul play is getting the other site a good ranking whilst hiding the link page my link back is on. So why should I help them when they don't help me?

In the second case, especially the PR0 / grey barred sites that I've been linking to for 18 months, something is wrong with the other site, so it is probably best to get out of that relationship.

I do state on the better PR links directories that I will only exchange with page-ranked pages. This leads to people putting links onto home pages, submitting their site, getting it approved, then either removing or moving the link to another page, hoping that my software will pick up the new location and I won't notice the PR drop. Very friendly, when in return they are getting a link on a PR3 / PR4 page.

Also, a lot of these are casino, mobile phone and adult orientated and absolutely nothing to do with the link page category. This is hindering the effectiveness of the page for people link swapping the way I want to be working.

Lastly, by filling up my directory with these useless swaps it is making it far harder for me to run my monthly checks. This means I'm not maintaining it as well as I should do. This is affecting the PR of the page and probably the site.

How would I feel if it was done back? Well, I've had enough link swaps rejected because my link page hasn't got enough PR and I estimate that 30% of link swaps are no longer reciprocated - the people requesting them have removed their link for some reason, and not told me.

So it seems I'm not the only one. Some of my link pages may also be grey barred - but the person requesting the link exchange can see that when they make the request. And if I never do anything to better the PR of these pages, then they will always be the same. Something needs to be increased!

Thursday 22 May 2008

Am I too aggressive?

Some people may say that I'm too aggressive when it comes to deleting links. But then, to be fair, I do accept most link requests and just delete those that I later decide are not benefiting me.

Should I do this? Well, it's my link directory - I'll do as I like! If the other site's owner is upset that I've deleted their link, then maybe they should have given me a better link in the first place.

OK, my link pages aren't whiter than white and many are not page ranked. But it's got to start somewhere and there's a good chance that those pages that aren't page ranked, but are obviously very popular with people requesting links, could be being hindered by all the spam requests. And if that's the case, then the good sites I'm linking to aren't getting a fair exchange because of the other sites.

So I like to delete links that aren't beneficial or relevant. At least I'm that open to requests! And as I've already mentioned, I only expect pages to be page ranked after they have been through 2 full page rank cycles.

My problem is that some idiots set up links that are absolutely irrelevant. Why try to fill my 'activity holidays' page with links to fake Rolex and casino sites? OK, so they take over that page and become 'relevant' to the theme of it - and in return give me a link back from an uncached page. Is this what the owners of the properly categorised links want, who are giving me links from PR2 pages? I'd much rather have 1 link exchange with a similarly themed site that gives me a link from a PR2 page, than 50 casinos linking to my travel website from grey barred (and usually uncached, or sometimes using rel=nofollow...) links pages.

Yes, it's harsh just deleting the links. I used to send the owners an email first asking them to put the links back, but very few replied and with many the email just bounced. So now I just delete the link and have done with it. Some of them email me months later asking where their link is and telling me how important it is to have a link from their site. If I'm generous I look at their links page and usually find the reason I've deleted the link. Not one has so far inspired me to put the link back in place.

Wednesday 21 May 2008

What to check on existing links

Every month, at least, you should check ALL of the links you have exchanged. If any aren't playing ball, then contact the site owner, or be like me (I lack patience) and just delete them.

Why? Well, why keep linking to a site that's not playing fair and linking back to you? It's of no benefit to you. Removing the link teaches them a lesson and means that the number of links out goes down and the space on your links pages goes up - more room for good links.

So what are you watching for?

First and foremost, is the site you are linking to still live, and is the page that your link was put on also still live? If the site has gone, certainly delete the link. If the page has gone, then either ask the owner why or delete it. If your link isn't on the page you were expecting it to be on, it's possible that it has moved up or down a page, so check that.

Next I refer to this blog and look back to see when I last recorded page rank updates. I give sites the benefit of the doubt; I'm not looking for the most recent update, but the date of the update before that. That would mean that the check in progress looks back not to the 2nd May update, but the 29th February update. Allow a week before that (it takes Google a week to prepare page ranks) and (bear with me) I then expect sites that I've exchanged links with prior to 22nd February to at least have a page rank 0. My thinking being that they have been included in 2 page rank updates, so should have been noticed by now.

If the page linking to me is grey barred, then I consider deleting my link. It's likely that either there's something wrong with their page structure or site, or they are blocking the link.

Also, and my link building software does this for me, check that the site owner hasn't changed their robots.txt file and hasn't slipped rel=nofollow into your link. If they have, they are up to sly tricks. You could contact them and ask them why, but if they are up to tricks, they will probably do something else.
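If you don't have software to hand, the heart of such a check is quite small. Here's a minimal PHP sketch - the URLs are placeholders, and real software would also follow redirects and do the robots.txt check mentioned above:

    <?php
    // Sketch: does the partner's page still carry a followable link to my site?
    $mySite = 'http://www.example.com';                      // placeholder
    $partnerPage = 'http://www.partner-site.com/links.html'; // placeholder

    $html = @file_get_contents($partnerPage);
    if ($html === false) {
        echo "Page gone - candidate for deletion\n";
    } elseif (!preg_match('#<a[^>]+href=["\']?' . preg_quote($mySite, '#') . '#i',
                          $html, $m, PREG_OFFSET_CAPTURE)) {
        echo "No link back - candidate for deletion\n";
    } else {
        // Re-read just the matching anchor tag and look for rel=nofollow.
        $end = strpos($html, '>', $m[0][1]);
        $tag = substr($html, $m[0][1], $end - $m[0][1]);
        echo preg_match('/rel=["\']?[^"\'>]*nofollow/i', $tag)
            ? "Link is nofollow'd - sly tricks\n" : "Link OK\n";
    }
    ?>

Run something like that over every entry in the directory once a month and the deletions pick themselves.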

My motto when it comes to checking links is simple - "if in doubt, delete it!"

Yes, that's pretty harsh. Why? Well, where's the point in having 200 links in from non-cached pages? I've just deleted around 250 links from my comparemortgagerates.co.uk website, leaving around 400 active. Most weren't linking back to me for whatever reason, and the rest just weren't giving me anything in return.

Remember that Google's guidelines don't have anything against structured links directories of selected sites - it's free-for-all links pages they don't want. As long as there is an element of links being rejected / deleted, then clearly you aren't a free-for-all and you shouldn't upset Google and co.

So I've just made room on my links pages for plenty more new sites. No doubt 30% will quickly remove my link, but I'll remove theirs in return.

Tuesday 20 May 2008

What should you be looking for when you accept a link?

I've already mentioned that there's a grey area about accepting / rejecting links because the offered links are on pages that aren't cached. The page might not be cached because of bad practices, or because it is simply too new.

But what else should you be looking for?

Well, if the page is cached or has a page rank (even 0), then that's a good start. It means Google has found it and has access to it.

What if it's not cached / page ranked? Go to the site's home page and look for their resources / links page link. Click it and then probably you will find yourself on the links directory home page with a list of categories. From here try to click through to the page your link is on.

If you can't find the links page by clicking through from the home page (sometimes you need to go via the sitemap) then search engines aren't going to find it either. And that's why it's not cached. If you can't navigate from home page to links page then I would always say no.

As you click through the pages watch the page rank. If suddenly it disappears then go back to the previous page (with page rank) and look at the link. Has it been hidden in any way, e.g. through rel=nofollow in the link?

Also take a look at the site's robots.txt file and make sure that's not blocking the links directory. There are many different tricks here. Basically, have a quick look and if there is any blocking then reject it. Otherwise, give the site the benefit of the doubt and check again in a few weeks, or after the next page rank update.
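The robots.txt part of the check can be automated too. A rough PHP sketch - deliberately crude, as it ignores which User-agent section a rule sits in, so treat it as a starting point (domain and path are placeholders):

    <?php
    // Sketch: is the partner's links directory blocked by their robots.txt?
    $robots = @file_get_contents('http://www.partner-site.com/robots.txt');
    $linksPath = '/links/';
    if ($robots !== false) {
        foreach (explode("\n", $robots) as $line) {
            // A Disallow rule that is a prefix of the links path blocks it.
            if (preg_match('/^Disallow:\s*(\S+)/i', trim($line), $m)
                && strpos($linksPath, $m[1]) === 0) {
                echo "Links directory is blocked - reject the exchange\n";
            }
        }
    }
    ?>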

Other things to look for include rel=nofollow on your link. With a clever bit of coding, actually not that difficult, a person will see an OK link whereas a search engine will see a link blocked by the rel=nofollow, or hidden totally.
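Just to show the sort of thing I mean - and emphatically not as a recommendation - the server-side version of this trick can be as blunt as the following sketch (nobody's actual code, but the idea is real):

    <?php
    // The deceit: humans get a normal link, spiders get a nofollow'd one.
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    $rel = preg_match('/googlebot|slurp|msnbot/i', $ua) ? ' rel="nofollow"' : '';
    echo '<a href="http://www.your-site.com"' . $rel . '>Your Site</a>';
    ?>

You'd never spot that from your browser alone.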

How do you detect this? Quite simply, look at the cached version of the page and then view the code. If there is javascript hiding or altering the link, or it is in any way blocked, then you will see it in the code. And if the site is presenting a different version of the page to search engines, you will spot it there.

Monday 19 May 2008

Do you check reciprocal links?

Do you check your reciprocal links to ensure they are still reciprocal? You should do!

If you use automated link directory software, like I do on many sites, then many people are adding their links to your sites. And I regularly check to see who is actually linking back. I expect, through experience, that around 30% of people requesting links with me will actually remove the link to me within a month or 2!

Now some lost links will be expired sites, mistakes, pages temporarily unavailable, but not 30%! Most are new requests that no longer link to me.

These are people that are trying to get 1-way links in and are assuming that I'm probably not going to be checking to see if they are linking to me. It's a hard job to check across so many sites, but it needs to be done. Else you end up with page after page of links out and hardly any links in.

It's devious of these people, but with decent link building software it's easy enough to catch. You just press a button and let the software get on with the job. By removing these people that aren't linking to you, your links directory is cleared up and leaves space for new link exchanges:

1 - this keeps the content on these pages fresh

2 - you aren't linking to unnecessary pages

3 - if you keep under a limit of links per page, it saves you having to create new pages - which leads to people complaining "I'm on page 2, which isn't cached, please move me to page 1, which is PR3..."

Sunday 18 May 2008

Why I dislike 3-way linking requests

I mentioned yesterday that I don't like 3-way linking requests - well, why?

Well, it's an attempt to trick the search engines, and anything you do to purposely trick search engines can get you into trouble with them - they don't like it. It can cause you to be banned, although I don't think that 3-way linking alone will get you banned, yet...

But why do people like it? Well they think that if site A links to site B, site B links to C and C links to A then search engines will think that all 3 have 1 way links. Clever? Not really. If search engines can spot 2 sites that have the same content hidden away on different pages, then I'm sure they can start to spot 3 way links. It must be easy to see that all of the sites that site A links to all point to site C. They might then class these as cheating the system...

What does really annoy me is the quality of 3 way link requests. Usually it's an SEO firm who are building for a client and want a link from your home page to the client's site, in return for a link from a directory they have set up somewhere. Not really an equal exchange. If they are being paid to link build, then they are only really interested in what happens now - that's how they are getting paid. The directory site (site A) has been set up just to hold their links out. If it gets banned from Google, they just create a new directory and start again. This means that your site, site B, isn't gaining from the relationship.

And what happens when their customer decides that the £50 per month or whatever can be better spent elsewhere? They just delete all links from the directory to start again. Do they tell you they have done this? Probably not. This leaves you linking to the customer, with nothing in exchange.

At least with reciprocal linking you are linking to and getting a link from the site that has an interest in getting a better rating. With 3 way linking, the site linking to you doesn't have to matter. It's only there as link bait.

So if you ever email me a 3-way linking request, guess what - I'll be ignoring it. If you are that desperate for 1-way links, then get to work linking another way.

Saturday 17 May 2008

Which links should you accept?

This is always a tricky question and basically, there's no one answer.

Many people like to only accept links from link pages that are cached in Google. It proves that Google can get to the page and that it's worth having the link. But what about on a new links page, even a new website? It's quite possible that the page won't be on Google yet, but will be in a few days or weeks, maybe even a couple of months. And if you refuse the link because it's not on a cached page and then in a few months the site owner has worked hard at link building and the page has a good page rank, then you have missed out.

So the fact that a page is not cached does not mean don't accept it. But what you can do is to accept the link for now and then review it in a few weeks or months. If then it's still not cached, ask the website owner why and tell them you want the link moving (but make sure first that your own link back page is cached - don't fall for that trick). If they don't answer, or don't comply, then it might be best to remove the link.

There are a load of other tricks that website owners can use to hide links from search engines. I am not suggesting you use any of these - just letting you know how people hide links to avoid being caught. If you do start using these tricks, once other people find out you will lose all of your links in.

But why would people be using such devious tricks? Well, it's well supported that one way links are far better than reciprocal links. Some people try this by 3-way linking, more about what I think about that later! So by hiding your outgoing links from the search engines, the search engines will think you don't have pages of outgoing links and that all incoming links are one way. Your link popularity shoots up and those linking to you get no benefit back.

Most people don't notice and this continues. Then at some point the search engines notice this false oneway linking and see that you are hiding content. This can result in being banned from the search engines!

Friday 16 May 2008

Can't see the wood for the trees?

Sometimes I feel as though customers allow their attention to detail to run away with them too much and they can't see the wood for the trees. There are a couple of customers' sites that are built and ready to go live, but in each case something is stopping them that's just too much attention to detail.

In the first case, the site is wonderful, works well and has had the stock loaded to it. It's published and ready to sell, apart from one missing essential - a home page. The home page isn't there - so anyone typing in the URL assumes there's not a site there. It's a shame when so much effort has gone into the site and the URL is being advertised.

Why isn't the home page 'ready'? Well, she gave us a logo to use and basically just wants this large graphic banged slap in the middle of the screen. Without checking, it's 500 pixels wide by probably 400 high. So it fills up most of the screen. But she wants it a little bit wider. Not much, just a little. So she's gone back to the original graphic artist. When the artist extended it and I published it, the customer didn't like the feel. So it's back to the drawing board. All for the sake of a few pixels. The graphic looks fine and I'd be happy to be a customer of the site.

In the second instance it's just nit-picking over fine details on the screen - "those links in the top right should be the same size as those on the left" was one of them. They were 8px and 9px Arial font respectively. Then some pictures needed 'balancing' more accurately - they were moved apart by 2 to 3 pixels. And the list of changes goes on.

Why? OK, if they make the site look horrible then yes, get them sorted. But most customers aren't going to notice these problems, and he has the ability to change the pictures anyway - the sizing will affect the spacing.

What is wrong with this? Well, apart from wasting time in development, the sites are on hold, not earning the customers the income they should be. Another recent customer picked over his site, finally agreed it was finished and then started receiving huge amounts of orders each week. That's that many orders lost in each of the previous weeks.

Does it matter if these details aren't right? I think not. I like to review the style of the site anyway once it's live and make changes if required then. As long as the site looks good and gives the site's customer the feeling that the shop is professional and not going to run off with their money, what does it matter if links are 8px or 9px? If they can be read, they can be read. If the site looks good enough to convince people to buy the products, they will buy them.

Thursday 15 May 2008

Geocoding ain't so easy!

My redesign of Cottages-4-Holidays hit a stumbling block last night when I tested how accurate the postcode to longitude & latitude file worked. I downloaded the file a few weeks ago, when I first put my ideas together, and was really excited about what I'd found.

But when I tested it, none of the properties I expected to see were there. I'd test loaded a popular tourist attraction and looked at which properties I was claiming were close by. I checked these against the list that mentioned the place in the advert and they weren't included. Then I looked at where the properties were - and they were well off!

It doesn't affect the main build of the site, I can continue there. But it does affect how quickly I can load up tourist information. The problem is that all longitudes and latitudes were being truncated to whole numbers. So -1.988 became -1, and given that at the same time 52.898 or whatever became 52, everything was appearing a fair distance off across the country.
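My suspicion is a stray integer cast somewhere in my load script - casting to int in PHP truncates towards zero, which produces exactly this effect (a guess at the culprit, since I've not pinned it down yet):

    <?php
    // Truncation towards zero: what a stray (int) cast does to coordinates.
    var_dump((int) -1.988);       // int(-1) - not even a proper round-down to -2
    var_dump((int) 52.898);       // int(52)

    // Keeping the values as floats preserves them.
    var_dump(floatval('-1.988')); // float(-1.988)
    ?>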

The cottages themselves have geocoding supplied by the merchant, so I will trust them. But my own geocoding needs looking at. I have basically 2 choices:

First - the quickest choice is to again guess the geocoding based on the postcode - look for cottages on the database with similar postcodes and assume the same geocoding holds true. Probably a lot less effort, but not as accurate - and what happens when there's no property with a matching postcode? (There's a sketch of this below.)

Second - every time I find somewhere to add, which will be quite a lot at first, I need to use a tool to geocode the postcode. Much more accurate, but it's adding time to the process and that will mean that in the end I get fed up quicker!
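For what it's worth, the first choice would look something like this in PHP - a sketch with invented table and column names, matching on the outward half of the postcode:

    <?php
    // Sketch: borrow the geocoding of a cottage that shares the outward
    // postcode, e.g. 'YO62' from 'YO62 5AB'. Returns null if nothing matches.
    function guessLatLng($db, $postcode) {
        $outward = strtok(strtoupper($postcode), ' ');
        $stmt = $db->prepare(
            "SELECT lat, lng FROM properties WHERE postcode LIKE CONCAT(?, ' %') LIMIT 1");
        $stmt->bind_param('s', $outward);
        $stmt->execute();
        $stmt->bind_result($lat, $lng);
        return $stmt->fetch() ? array($lat, $lng) : null;
    }

    $db = new mysqli('localhost', 'user', 'pass', 'cottages');
    var_dump(guessLatLng($db, 'YO62 5AB'));
    ?>

An outward code usually covers a fairly small area, so that's probably accurate enough for 'nearby' lists, if not for pinpointing a map marker.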

I suppose if the stats show that the site is starting to work again then I'll find the time and energy to put into lots of properly geocoded tourist attractions. But I was going to write an admin function to load new ones and get my wife to give me a hand! She'll just have to learn how to geocode as well!

Wednesday 14 May 2008

Cracking the XML

After a lot of struggling I finally got the XML feed working and the first 2 directory pages, linking to 9 merchant pages, are up! I started on the 2 smallest categories I could find and even they took a while - and I'm not even 10% of the way through the list of merchants! It should get quicker as I work through them; I was still ironing out problems and smoothing the system into place at first.

So I've created an XML based feed for Mortgage Providers and Car Breakdown Providers. I suppose that adding 1 new section a week, or maybe a handful of merchants each week, is probably best as far as the search engines are concerned.

If I go full out to get all of the merchants up it would be a good sized directory structure, but there might be more appeal to the search engines if I can build it steadily. That way, every time they return to the site there are new pages to uncover. It's giving a more maintained look to the site. And along with the merchants updating their XML feeds, this will provide a series of pages on which there are regular updates.

Hopefully, this area will be wonderful bait to the search engines and will start driving in more traffic to the site, and maybe even a few more affiliate commissions. It wouldn't take many affiliate commissions for me to be really happy. Some of them pay really well and just 1 payment would put a huge smile on my face!

I did decide last week, aside from this area of work, that to get traffic back to what it was on CompareMortgageRates I probably needed to start adding new content pages weekly. Well, whilst I work out what I'm going to add this is a good starting point! I'm also trying to add 'news' to content pages (as well as the home page) to keep these pages fresh. Since the site made its drop through the search engines I've been trying my best to recover it. One day I will, but the traffic has dropped further over the last week, even though its position on Google searches remained constant. Presumably other search engines have now also lowered it!

Tuesday 13 May 2008

XML Feeds - Made to make you work?

I decided last night to have a play with the OMG XML feed for their editorials, intending to use it to display merchant information on my mortgage rates website. Basically, starting a financial directory that I can later add to. In the day I'd noticed another site that ranked quite well doing this, but manually updating the displayed information (the offer details were out of date).

Now OMG provide their editorials in 3 formats - JavaScript, which I usually use, I-Frame, which I've never tried, and XML, which I'd not tried before from them. I've used XML from other providers plenty of times - other affiliate schemes, news readers etc. So I've got plenty of the basic code about.

The problem with JavaScript and I-Frames is that they don't add anything to your webpage as far as the search engines are concerned - to my knowledge, they just ignore these parts of the code. At most, they will actually follow the I-Frame (I have seen that in some of the sites I've built), but then the benefits of the content lie with the provider. There's not lots of pages of text that I can create and add further bits to.

So taking the XML feed in real time seems a good idea. Why not just copy the text? Well, as I mentioned above the site I saw yesterday did this and said that the insurance offer was a 15% discount, whereas it is now 20%. OMG don't like affiliates taking the text alone as out of date offers and rates are displayed. And I don't want to create a large nightmare for myself of continually having to update text - it's bad enough when they don't update the dynamically served content and email me to complain (you know who you are if you are reading this!!!).

So last night I started to build the OMG feed into my standard code. It looked as though it would be easy - not exactly a hard layout to use. But I was playing with it for hours as it just wasn't working. 'ROOT' was appearing at the end of the text - it's actually from the closing tag (</ROOT>) - and control characters were appearing where they shouldn't be - something like &#xA; (the XML entity for a line feed) instead of line breaks. I thought it should be simple: change &#xA; to <br>. But it just didn't work. I could remove the & on its own and the #, but the full string just wouldn't go.

Then when I looked at the output in Notepad it appeared that the text was full of non-printable characters. Between every displayed character appeared a space in Notepad, which didn't appear on the screen. Not knowing what these were made it very difficult to remove them. I suppose some sort of regular expression could have done it - just thought of that now!
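Thinking about it since: a 'space' between every displayed character in Notepad is the classic signature of UTF-16 text being viewed as single-byte characters - the invisible extras are null bytes, which would also explain why my string replacements never matched. I haven't confirmed that's what OMG actually send, but if it is, converting the encoding first should make everything behave. A sketch of the idea (placeholder URL):

    <?php
    // Sketch: normalise a suspect feed before doing any string replacements.
    $raw = file_get_contents('http://feeds.example.com/editorials.xml');

    // A UTF-16 byte order mark at the front gives the game away. Convert it,
    // otherwise every str_replace is comparing against byte pairs and fails.
    $bom = substr($raw, 0, 2);
    if ($bom === "\xFF\xFE" || $bom === "\xFE\xFF") {
        $raw = mb_convert_encoding($raw, 'UTF-8', 'UTF-16');
    }

    $text = str_replace('&#xA;', '<br>', $raw);  // now the entity actually matches
    $text = preg_replace('/[\x00-\x08\x0B\x0C\x0E-\x1F]/', '', $text); // stray control chars
    ?>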

What their purpose is and whether it's something I was doing wrong, I just don't know. But something wasn't right. It could be that they are there intentionally to make sure that the text isn't read by the search engines - either to stop them caching text which goes out of date or to prevent problems with Google's duplicate content filter. Either way, it was a pain I was trying to sort for about 4 hours. I had hoped to get the feed going in 30 minutes and spend the rest of the time getting most of the pages up. It wasn't to be.

Would appreciate any more thoughts on this problem and hearing what others have done.

Monday 12 May 2008

Which Webpages Get Page Rank

Back to looking at which pages get page rank and which don't inherit anything.

Recently I took on a new customer with an existing site. He was a bit upset that his current webdesigner was going to charge £60 + VAT just to renew the URL (the guy is using the same registrars as me and they charge under £8 plus VAT). He was also shifting his concentration slightly away from scooters and more to motorbikes, so wanted a copy of the website under a different name.

It's an OK looking website so it was just hosting changes, plus I gave him a content management system so that he could change the bikes he had on display - with his previous designer he had to send him the details and once every few months the bikes would be changed. That was another bugbear!

So I took over the website, created new pages to display the bikes (PHP rather than HTML to access my database) and created a copy of the site, with a different sort order - let's avoid that duplicate content filter.

He's just asked me to make a slight change to the sites, so I was looking around the pages. Most of the pages, including the new ones, are now in at PR0. In fact, both sites have the same distribution of Page Rank.

On both sites the home page is PR0 - not something to be proud of, but then it's not an SEO optimised site - it's a contact point for his magazine adverts.

On both sites all of the bike details pages are also PR0.

The enquiry form, with very little content (just the field names) is PR0 on both sites.

The contact form on both sites, with matching address, phone numbers and email address (word for word the same - both display the one email) are both PR0 - so much for the duplicate content filter...

But the 'company' page - with details about the company and 125 words in the text - is grey barred on both sites. Both pages are cached, so it's not that the search engines haven't found them.

The only major difference, and this is because it's a site I've inherited rather than designed, is that the company page is reached only by an image map link, whereas there are text links back to the bike pages, the home page and the enquiry form. There's not even alts / titles within the image map - something I would have done if I'd written the site myself / been asked to optimise it.

But this alone is not stopping the inheritance of page rank as the contact page is also only available through the same image map. And that's got PR0. Depending on how you count it, it's only got around 20 words of text on that page.

So, the company page isn't grey barred (on both versions) because it's only linked through an image map and it's also not grey barred because it's not got enough content - the contact page has less. So what can it be?

The smallest bikes page has around 250 words on it, along with pictures of 4 bikes. There's no links to other pages or anything, you have to phone for details.

So I think it's one, or a combination of:

- Google doesn't like the duplicated information, but is happy when the duplicate information is contact details.

- Google doesn't like the content.

- Google is cleverly detecting which pages could be of interest to people.

I suppose it is the pages that are most likely to be of use to people searching the internet that have page ranks. People aren't likely to be searching for the company history, but might want to contact them after seeing a magazine advert or might be searching for an offer on a bike.

I think a few more investigations are required to see the effect of page rank on other sites I manage.

Sunday 11 May 2008

What does the redesign include?

Well, for a start the redesigned site is still under development so it's hidden away. So I can't show you it (could get confusing if search engines got hold of a development version!). But here's the features I'm including and why. Then, unless I make excellent progress and get the new pages ready to go live, I'll leave this theme for a bit and look at what I was on before I distracted myself!

First, I'm going from building the site as files on my PC to loading it into a database - this means the process is quicker and can conceivably be achieved in around 10 minutes' work - to be proven when I write it!

Second, I'll now be able to identify new properties and highlight them on the home page and other main pages - getting cottages listed quicker.

Third, I'm building (this is going to be a long ongoing process) a database of UK tourist attractions. Each property & town will link to the nearest ones I find. This adds unique content, and people might find the site searching for these attractions.

Next, when the owner has mentioned a tourist attraction in their text that I have a page for, I'll change the description to link to it (there's a sketch of this after the list). Makes the internal linking fuller.

Next, when the owner mentions a town in a description, I'll link to that. Makes the internal linking fuller.

Also, I'll display the nearest properties to the one shown, just in case it's not suitable - it also helps add more content to the page.

Possibly I'll also take a feed of hotels I have access to and include nearest hotels on the page - again, more content and possibly more commissions.

I'm also adding a Google map showing the property - adding more value to the customer. But the map doesn't seem to want to work at present... Works fine, until I try to include it in the code...

I'm also monitoring which pages are hit and linking to them as the most popular etc from the home page - helping customers find the most popular properties; telling me which are the most popular properties and getting the search engines faster links to these properties.
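As promised above, here's a sketch of how the attraction and town linking might work - the names and URLs are placeholders, and the real list will come from my attractions database:

    <?php
    // Sketch: turn known attraction names in a description into internal links.
    function linkAttractions($description, $attractions) {
        foreach ($attractions as $name => $url) {
            // Whole words, case-insensitive, first occurrence only -
            // linking every repeat would look spammy.
            $pattern = '/\b' . preg_quote($name, '/') . '\b/i';
            $description = preg_replace($pattern,
                '<a href="' . $url . '">$0</a>', $description, 1);
        }
        return $description;
    }

    $attractions = array(
        'Castle Howard' => '/attractions/castle-howard.php',  // placeholder
        'Eden Project'  => '/attractions/eden-project.php',   // placeholder
    );
    echo linkAttractions('A cottage two miles from Castle Howard.', $attractions);
    ?>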

I'm sure there's more, but that gives a flavour of what the redesign is all about.

Saturday 10 May 2008

Benchmarking The Redesign

How will I know if the redesign of the website is actually of benefit? What would be a benefit - what makes it worth my time redesigning a site that has basically flopped after a once proud time?

In short - income! This is derived from 2 factors - search engine visitors arriving on the site who then either book or click Google adverts.

Now, I've already said that the success of the site varies month on month. January / February are usually the best times of the year for the site. So what's a good way to benchmark the site?

I could look at the current position and compare that, but I wouldn't know whether any changes - or more to the point, what percentage of the changes - came from my work.

So the best way forward is to just update one of the 2 sites - the least well performing one, wait for the search engines to pick up on the changes and hope that I see massive swings.

Last month, the second site got just 4,942 page impressions and the first one a mere 992 page impressions. I used to get that per day! The income per site was $170 & $31 from Adsense (I can't mention click through rates / cost per click - Google policies). And between them, they managed 2 failed bookings and 5 confirmed bookings, valued at £90.48 commission for me. So around £200 income in the month. Not fantastic, but it's something.

So that's where we start from. Hopefully in a few weeks I'll start to see the ratio of traffic change from its current 5:1 to (maybe) the other way around and the incomes improve accordingly.

Tomorrow, what I am doing to achieve this.

Friday 9 May 2008

Why To Redesign When Website Traffic Tumbles

So what is the driving force behind me wanting to redesign a website that used to work perfectly well? Simple - it's not working so well now, and I 'blame' the Google duplicate content filter for its demise.

Blame might not be the right word, but you get the drift. So what happened?

Well the first holiday cottages website got loads of traffic, more than the second holiday cottages website by about 7:1, if my memory is correct. That's when they were both at their peak.

Then they both dropped off and not much happened. With the second being newer, having less traffic and therefore being a lesser risk if anything went wrong, I decided to convert it from html to php and make a few changes along the way. Basically, the intention was to make it quicker to upload (using include files for the standard code which meant 4,000 smaller files to ftp...). But at the same time I added a few small tricks that allowed me to change the descriptions and re-write a lot of them.

I put this in, sat back and waited. Along came the search engines and after a few weeks, once all of the pages were cached, the traffic increased. In fact last month it was 5 times the first site's traffic, although still not enough!

Confident that duplicate content was the problem, I did attempt a minor rewrite of the first site too, but never having the time it was half put in and probably made the site worse.

So a few weeks ago I started again (I've frequently picked up the site after over a week of not working on it...). My idea is to produce a site that doesn't just show the affiliate content but also has plenty of unique content and is internally linked in such a way as to provide a network of information. Hopefully, something the search engines will like and send visitors to, and something visitors will be able to make use of and want to book through...

Thursday 8 May 2008

Why Redesign A Website?

So I explained yesterday that I have 2 cottages websites, built slightly differently from the same affiliate data feed. The original was getting far more traffic than the second, because of the keywords (presumably) that each targeted. So what happened?

Early on, it worked out at around 700 page impressions to one confirmed booking! Yikes, that's a lot, but that's based on the first year or so. Some months are better, some worse. So why is this?

Well, for a start it's difficult to track what's happening as the affiliate scheme doesn't provide tracking so I don't know which of the 2 sites gets each booking. Both are different styles, so one style could work well, the other could be failing. And the success changes throughout the year - as availability changes.

Now a lot of these people will be looking and then realising they are seeing cottages they have looked at elsewhere (other affiliates) or they could be looking and checking availability and finding the accommodation booked. It is frustrating that although holiday makers are wanting to book from the end of the year, the availability doesn't tend to go onto the database until the start of the year. So there's lots of lost enquiries there.

There's also the worry that with that low conversion rate people are seeing the site and thinking 'yuk' - quite possible in the original site, but the second looks much better. And I suppose that there is also the confidence factor - do you really trust the site enough to book?

But the main reason I am redesigning my site is that the traffic has fallen off dramatically recently. I'm certain I know why - more tomorrow...

Wednesday 7 May 2008

Redesigning a Website

I thought I'd take a break from the website marketing and talk through the process I'm currently going through with my Holiday Cottages website. It's my own site, that's been running for almost 3 years, and I'm about to put in a totally new version of the site. It seems a shame not to share what I'm going through and record the frustrations and hopefully successes of the site.

But what is the site and why am I redesigning it?

Well, it started off around 3 years ago - just over a year after I set up my webdesign business. Things were quiet and I was looking for other avenues to supplement the income. It had never been my intention to completely run the business as webdesign - I don't like the eggs in one basket situation.

I'd fallen quickly into affiliate marketing - starting off with the Mortgage Rates website. Now there's another site in need of plenty of TLC! That started as a venture between me and a mortgage broker and I'd accidentally discovered affiliate selling and started trialling banners to supplement the income. 4 months later I discovered Adsense and tried that (very successfully) as well. At the time the site was getting a lot of money spent on it in Adwords, so the income was well received.

Then one day I discovered an affiliate scheme for holiday cottages and decided to give it a go. I was able to take a feed of all of their properties, create almost 4,000 pages of website using Perl and then FTP the site to my hosts.

But it takes ages to load that - every month!

After about 4 months the site had taken off, it was getting reasonable traffic (for a site that only needed me to initiate the monthly rebuild) and was earning £400 per month plus on commissions plus Adsense income.

So I built a second site, hoping that would be as successful, but it got about 15% of the traffic (it was written differently to cover different keywords). But still, the bookings went well for a few months.

Then the traffic dropped off, the original site became grey barred and I was too busy with customers' sites to do much else. Total income between the sites is around £200 per month, which is still a good return for the monthly effort (or lack of effort).

But what was wrong, what could I do and what have I tried? Read on...

Tuesday 6 May 2008

Page Rank - Doesn't Always Go As Expected

I've been waffling a lot about page rank recently and I've noticed that the more I try to work it out, the less I know!

The theory goes that page rank is inherited from the linking in pages. Take a customer's PR5 website, you would expect all of the pages linked to from the home page to be PR4 - some are, some aren't.

There are 2 pages on the site that are only linked to from 1 other page. That other page is grey barred. So what PR do we expect for them? Well only 1 link each from a grey barred page. Experts will tell you they also will be grey barred. Actually, they are both PR3.

Take also the links directory of that site. Every page points to the resources page - which is grey barred. It in turn points to the actual links pages, which are interlinked but not linked to by the rest of the site. The only way to find these pages is by clicking through the grey barred top level page.

So do we expect the whole links directory to be grey barred? Of course. A lot of pages are - but then they are 'new' so that's right. But there are dozens of links pages that are PR3.

The top level page just has the standard text that's on every links page. The welcome, actual directory etc. So remove this duplicate content and it's an empty page. Somehow, Google has decided to jump that page and give the PR straight to the actual directory pages.

It's looking like, to get a good PR, you need at least a couple of paragraphs of unique content on the page and a link in from a good PR page somewhere above it. If the content is too similar then the page gets ignored. Possibly there's something about other information linked to from that page - I'll have to explore more with my mortgages site that started this off and see where it leads me.

Monday 5 May 2008

Using Page Rank

So what precisely is page rank used for and what can we learn from this?

Take a search on my business name - Janric. It returns (today) 21,200 results. Not only is my own website listed up there, but also professional directories that I'm listed in, Sudoku puzzles (???), another business with the same name in another country and many, many more, including some of my customers' sites where Janric is mentioned.

Even typing Janric Web Design gets almost 5,000 pages - and this page will also be added to that list soon! So, what are the most relevant results to my search?

Well I'm glad to say that in both cases my home page is top. You would hope so in the second case, but when it's just Janric alone, maybe that's not so much to be expected.

Well what has had to happen is that Google has examined the page and links into the pages and seen that Janric is relevant to them. It's used its on page and off page factors to decide to list my site highly, then if there has been any doubt, the PR4 of the home page has put it to the top.

But it's not the highest PR page with the words Janric on it. A customer's site is PR5 and that has Webdesign by Janric on the bottom of every page. And it doesn't appear at all towards the top of the search results. Google has correctly ordered the results. It is interesting to note though that the link text is standalone - not in a paragraph. A customer's site with a lower PR that has webdesign by janric in a copyright statement at the bottom does appear higher up. So a link within a short paragraph is showing more weighting than a link on its own from a much higher PR page. Makes you wonder if link directories are worth the effort...

But this example shows that page rank alone is not enough to get you listed top on Google - other factors are important. Only when everything else is equal does PR matter.

Sunday 4 May 2008

What is Google Page Rank?

So just what exactly is Google's page rank? I've previously explained it in terms of what the values can mean, but how is it calculated and what is it used for?

Without getting into the full maths, an overview of page rank goes like this. All search engines want to provide the most relevant results to their users. If the results are good, internet users stick with the search engine. They then do more searches and are more likely to click the ads at the side of the results.

Google, and other search engines, try to find which pages are most relevant to your search terms. They can have a quick stab at how relevant the page is based on content, context etc. But when there are hundreds of thousands of results, they will have lots of pages with 'equal' importance. It is at this point that Google use page rank.
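For reference, the formula published in Brin and Page's original paper looks like this, where d is a damping factor (usually quoted as 0.85), T1...Tn are the pages linking to page A, and C(T) is the number of links out of page T:

    PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )

In words: every page that links to you passes on a share of its own page rank, divided between all of the links on it. (Whether the toolbar PR we see still matches this exactly, only Google knows.)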

This is the official story according to an article I was reading a couple of days ago. But, although it is backed up by the fact that results aren't listed in page rank order, there are exceptions where page rank outweighs the results of the on page factors.

Try the famous 'click here' search and the first result or two don't have either word on them - that's the power of incoming links. So maybe there should be considered to be 3 factors:

1 - the page content
2 - the incoming links pointing to that page
3 - where the above are equal, the page rank

That could explain all of the above and what we see in results. There are many times when I click on the cached version of a page and see a comment that a certain keyword was only found in links pointing to that page, not on the page itself. This implies that incoming links have a good amount of weight - probably equal to the content itself.

Why? Well, we'll look more into Page Rank tomorrow to see. I like to keep the posts short, simple and to the point!

Saturday 3 May 2008

Don't be the end of the web

It has long been said that search engines like pages with internal links - it implies that the page gives an overview, leading to more detailed information. And I've been reviewing my own website in the light of the recent page rank update.

I noticed that on the whole, there's an apparently random scattering of PR3 and grey barred pages, with the home page PR4 - why??? I've also often mentioned customers who complain they are not top of google on a 1 page website.

Well it was interesting when I noticed that the majority of pages that were page ranked had links to other pages in the main content. Not just the menus and other side bars.

What were the exceptions? Well there's a contact page and (OK, very naughty!!!) a landing page. What's special about these to break the rules - well they both display phone numbers - in the content.

I also mentioned yesterday that the portfolio page had gone from grey barred to PR3 in this update. What had I changed?

Well, at the time of the last update it listed 55 - 60 other websites and linked to them. Now, it links to internal pages, each of which points to the customer's website. So this has moved to the theory of giving more detail.

I'm starting to look over other sites of mine and there's a pattern forming - more information as I uncover it! But it looks like the 'perfect page' on Google has plenty of original content and some internal links or contact information in the page content.

Very interesting! Question is - how far does this need to be pushed to work?

Friday 2 May 2008

Google Page Rank Update

There appears to have been a page rank update in the last few days, possibly like the February Page Rank Update, it was released for the end of the month.

In previous updates I've been able to look back and see that posts over a certain age have a PR0 within this blog, but that doesn't seem to be the case this time around - all are showing as new (grey barred). My mortgage rates site seems to have stayed the same, whilst my own webdesign site has recovered the 1 page rank it dropped in February, and it's back to PR4.

I would suggest that it's home pages that have changed, but my portfolio page was grey barred after the February update and it's recovered to a 3, so that's not the case. Maybe it's more a case of sandboxing new pages. Strangely, I have discovered that one page showing a testimonial is PR3, whilst others that are older and newer and all linked to from the same places in the site are grey barred.

Also good news is that this blog has increased from PR3 to PR4. Only 8 months of posting and not really knowing what I was going to do at the start and it's slowly creeping up. Other blogs of mine haven't had this success.

As I discover other strange details of the page rank update I'll record them here in the usual way.

Thursday 1 May 2008

Are all links directories equal?

Are all links directories equal? Is it worth getting links on every links directory going or are some worth more than others?

Well, hopefully fairly obviously, some links directory pages are worth more than others. Depending on the tool used there are a lot of variations.

Some people still insist on throwing out links pages with hundreds of links with just the link text, no content. Given that the popularity passed on is spread between all of the linked-to pages, the more links on the page, the less popularity each one gains.
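To put rough numbers on that (using the simple share-per-link view from the original page rank formula - the real calculation is iterative, but the proportions hold):

    share per link = page's vote / number of links on the page

    25 links on the page  -> each link gets 1/25  = 4% of the vote
    500 links on the page -> each link gets 1/500 = 0.2% of the vote

So a link from a sparse, well kept page can be worth twenty times one from a link farm of the same strength.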

Also, if the page is just links and no text, then this must raise suspicions with search engines. There are also some written in HTML and others with filenames such as links.asp?page=123&theme=456&skip=789 etc. With too many parameters you will lose the search engines. I see no difference between PHP, HTML and ASP pages; in fact some of the links pages that I've seen the best results from getting listed in have been ASP with a single parameter.

So what am I saying? Is it best to review the page and decide whether to exchange? It can be. If you are working on getting a new site listed then you may still be at the point of anything is better than nothing. But later you might be choosy.

Importantly, if you are deciding which tool to use then look and see how good its links pages are. If it gives bad pages then fewer people might want to swap with you, and just as the popularity vote out counts for less, poor links software may also count against your site.