Wankers

Yes: wankers. Wankers! WANKERS! WANKERS! __WANKERS!__

Who am I talking about? Managed.com, of course, the

company to which I give good money each month to host this site.

What happened?

Well, managed.com decided to move its network from California to New Jersey.

At least, that’s as much as they told us, the paying customers.

In preparation for this, they sent all of their customers an e-mail asking each of us to supply our root password via plain-text e-mail. For those of you who aren't in the field of computer system and network administration, let me state that this violates one of the most basic and universally accepted principles of the profession: never, under any circumstances, send passwords in the clear.

And yet, my hosting provider was asking me to do just this. In hindsight, that

should have been enough to spur me into action. I should have found another

hosting provider, right there and then, and moved my data prior to the

migration. But I decided to wait until after the migration to seek a better

provider. As always, laziness, compounded by a failure to recognise the

urgency of the situation, won out.

Anyway, managed.com were supposed to back up their customers’ data, firstly

with a full back-up and then, shortly before the migration, with a further

incremental back-up. The migration was supposed to be barely noticeable, with

a guaranteed maximum of two hours of downtime.

I was sceptical, but kept my fingers crossed.

Can you believe that managed.com didn’t tell its customers in its notification e-mail when this migration would actually take place? We were

left to guess. E-mails to them on the subject went unanswered, as did requests

for a secure channel through which to supply one’s root password.

When I noticed one day that my machine had been rebooted without my

permission, I incorrectly assumed the migration had already taken place. If

I’d known at that time that things would be moving to New Jersey, not just

around the corner in California, I could have run a traceroute and seen that

my machine had not actually gone anywhere. At the time, however, I thought they were just moving locally. What else could I think? Managed.com had told me virtually nothing in their e-mail.
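For those who've never used it, traceroute lists the routers between you and a server, and the hop names and latencies give the geography away. Something like this would have settled the matter (the figures in the comments are rough illustrations, not real measurements):

    # A box still in California would show West Coast hops and low
    # round-trip times from a nearby vantage point; one in New Jersey
    # would show East Coast backbone routers and markedly higher latency.
    traceroute caliban.org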

caliban.org mysteriously went off the network on 9th May. It remained

inaccessible for almost three days. So much for the guaranteed maximum of two hours’ downtime.

All of my e-mails to managed.com went unanswered in this period. Only when I

threatened them with legal action (a trick I picked up in America) did they finally respond by rebooting the machine and getting it back on-line.

Naïvely, I thought that would be an end to my problems. Yes, that was

very naïve of me.

You see, managed.com restored my service from a week-old back-up. I’ve no idea what happened to the promised incremental back-up. Even if it was made, it would have had to cover the last week’s worth of data, not just the day before the migration; I suspect, however, that it was never made at all.

The net effect? I found I was missing a week’s worth of e-mail, multiple DNS

changes had been lost, the last week’s worth of blog entries had effectively

never been written, and sundry other less serious issues now needed fixing, such as recent software updates having been undone.

More e-mails to managed.com went unanswered. Due to an oversight on my part,

my own off-site back-ups had not run for some time, so I had no private copy from which to recover my data. Typical.

I began work on the system to repair the damage my hosting provider had done to it, but before I could achieve very much, the machine went down again, this time for more than a day. Once again, e-mail threats were required to get it back on-line.

So what’s going on?

Exploration of my system’s log messages shows that the new hardware on which

my data resides is not the same as the old. For one thing, the system has a

different Ethernet card. Now, either that card is flaky or the Linux driver

for it is, because the system regularly gives up the ghost and all but

crashes: TCP connections to open ports hang without response; processes can no

longer be forked; even syslogging stops.

Yet, even if the new hardware had presented no problems, it’s inconceivable

that a company would move a working Linux (or any other) system to new

hardware and just expect it to work. What if I had not had the driver for the

new network card compiled for my kernel? My machine would have had absolutely

no way of ever getting back onto the network after the migration. It’s sheer

luck that I can sometimes still log into my machine and that it’s not

completely dead to the world.
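Checking which driver a card is using takes seconds, assuming you can get a shell at all; eth0 below is an assumption about the interface name on the new box:

    # Show which kernel driver is bound to the interface.
    ethtool -i eth0

    # List loaded modules; the driver should appear here if it was
    # built as a module rather than into the kernel.
    lsmod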

So, the networking on the new hardware is extremely unreliable. rsyncs

regularly fail with checksum errors. The more network traffic one pumps over

the interface, the more such errors occur, until the system becomes unstable and finally unreachable.
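Those rsyncs, for context, are my off-site back-ups: nothing exotic, just rsync over SSH, roughly like this (the paths are illustrative, not my actual layout):

    # Mirror the server's home directories to a local disk over SSH.
    # -a preserves ownership and timestamps; --delete keeps the mirror exact.
    rsync -avz --delete -e ssh root@caliban.org:/home/ /backup/caliban/home/

It’s transfers of this sort that now regularly die with checksum errors.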

It’s also possible that the machine has bad RAM or ineffective cooling, either

at the CPU or the data centre level. Witness these messages, culled from my

log in a rare moment of accessibility:

    May 15 06:39:58 ulysses CPU0: Temperature above threshold
    May 15 06:39:58 ulysses CPU0: Running in modulated clock mode

The system is now on heavy-duty medication: cold reboots, at first twice

daily, but that proved inadequate, so cron now reboots the machine every hour.

That’s the only way to avoid the machine locking up completely, which then

puts me at the mercy of managed.com to reboot it. That’s something that now

seems to take more than 24 hours to accomplish.
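For anyone curious what that medication looks like, the entry in root’s crontab is along these lines (the exact command is illustrative; any clean reboot would do):

    # Reboot at the top of every hour to pre-empt the inevitable lock-up.
    0 * * * * /sbin/shutdown -r now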

Clearly, this appalling state of affairs can’t be allowed to continue, so I’m

already on the look-out for alternative hosting providers.

A year ago, when I selected this company to host my services, people seemed

happy with it. I, too, was happy with the service until earlier this year. In

the last couple of months, however, things have been going downhill, which is

never a good portent. Nevertheless, I was not prepared for what

has now befallen me. These people are lacking even the most basic system

administration skills.

So, what happened? Well, a little research shows that managed.com is not

really performing a migration. The hard drives and the data have moved to the

other side of the country, yes, but not because managed.com is doing it. No,

managed.com has been sold, you see? My data now turns out to be at the mercy

of Web Host Plus, so the current disaster is

actually largely due to their mismanagement and incompetence.

In fact, it turns out that a great many people are in a [similar or even worse

state](http://www.webhostingtalk.com/showthread.php?t=508358), thanks to this

bunch of clowns.

Sixty-three

pages of utter misery and appalling professional disregard of one’s customers

come to light.

Anyway, to say that I am in the market for a new hosting provider is an

understatement. If you have any recommendations, I’d be glad to hear them.

Ideally, they should not be located in the US, due to that country’s Draconian

legal stance with regard to privacy.

Thanks to Google, I was able to rescue the missing

blog entries from the Google

cache. I had to add back

the article comments by hand, which meant losing their original timestamps, but at least the text of the entries themselves has been recovered.

The week of missing e-mail, on the other hand, is simply gone. Calls to

Web Host Plus to make the missing incremental back-up available simply fall on deaf ears.

I’m utterly appalled to experience first-hand how this company has lost my

data and now ignores my complaints. I’m left bewildered as to the precise

ratio of incompetence to deliberate professional disregard, but I am 100% sure

that I have to get my data away from this bunch of wankers as soon as I

possibly can.

Until that time, expect the server to be up and down like a yo-yo.


2 Responses to Wankers

  1. Bas Scheffers says:

    I’ve had (or heard of) nothing but trouble from any managed hosting provider. We had one with Rackspace.com. 3 times the thing died and 3 times they decided to bring it up with a stock sendmail config that bounced anything the backup MX pumped to it.

    My advice? Get a nice little 1U Dell or IBM server and stick it in a datacenter in Amsterdam that you have 24/7 access to. Make sure they provide a terminal server you can SSH into, so you can log into your box over a serial connection. Ideally, they also provide remote power-cycling this way.

    It’s a higher up-front cost, but it saves you in the long run. You also get to control the hardware, meaning you can have little luxuries like mirrored disks at the price of, dare I say it, an extra disk, instead of paying $50/month more…

  2. I was looking at Rackspace.com until your testimonial.

    The obvious solution is to go with XS4ALL, but they’re very expensive. I’m sure the level of service is excellent, but I just don’t feel that the dedicated hosting of my domain should cost a lot of money. It’s such a simple need.
