Continuing Professional Development – does your company do enough for you?

This is something that has bothered me a lot over the last few years and the last few companies. Most of my original classmates who specialised in Civil or Mechanical Engineering take CPD as a given. Whatever company they went to would be expected to provide CPD for them in a structured way, and to support them in their progress towards the C.Eng accreditation. The company would be expected to provide mentoring, and to send employees on accredited CPD courses (like those run by the IEI).

But it wasn’t just civil and mechanical engineers; it was the same across the professions. Barristers (since 2005) are required to undertake CPD work. Doctors (since 2007) are required to undertake 50 hours a year (or 250 over 5 years) of CPD work. For teachers it is set to become compulsory in the near future. Accountants have to do it. Auctioneers do it.

So what about computer engineers and programmers and the IT sector as a whole, Ireland’s second-largest industry and the claimed saviour of our entire economy, an industry characterised by continual change and with the shortest period to technical obsolescence of any of our industries, where CPD is so obviously needed?

Personally, I can say that none of the companies I ever worked for (and granted, I’ve not racked up several decades of wide-ranging experience yet) were involved in a structured way with CPD. In fact, none of them ever mentioned it at all. The closest I saw in the last seven or eight years was when one company (after much lobbying by the coding team) begrudgingly agreed to buy some reference books from amazon.com which the coders could use (and take home if they signed for them, with someone assigned to track who had which book). It was pitiful – they were using the time of a coder (a commodity they usually charged clients around €100-200 per hour for) to track which of their employees had a book worth maybe €30, a book those employees were trying to use to improve their skills, something the company could only benefit from.

At best, in companies like that, CPD is an individual responsibility. Courses, conferences, seminars – they have to happen on your time, whether it be a weekend or a holiday. Books, admission fees, subscription fees, they become a living expense shared only by others in our field. And given our industry’s counterproductive love for masochistically long hours, you’re talking about working ten hours a day during a slow week, then trying to grab an hour here or there to read any CPD material you can, and that’s never light reading. Small wonder then, that as far back as Peopleware, it’s been known that the industry average for CPD work in IT is not even one book. Not one book a month or one book a year, but not one book, ever. In fact, just by reading this blog, you’re one of the technical elite (not that this blog is special — if you’re reading any blog on programming, you’re one of an elite group in our industry).

Worse yet, in several of the places I worked over the last few years, asking for CPD support would have been a black mark against you; it would have been seen as an admission of incompetence and nothing more. The attitude was, effectively, that you should have learnt everything in college, and now it was time to stop with the time-wasting of learning and get on with billable hours. CPD was something you did at home and didn’t mention at work. Supporting CPD in those places was seen as the company spending money to improve the employees’ CVs so they could flee elsewhere. Oddly enough, not supporting CPD (and generally treating employees like second-class citizens) often prompted exactly that flight, leaving the company scrambling to replace the expertise and knowledge that walked out the door.

The really depressing part of all of this is that studies have shown that CPD benefits the company dramatically. It’s well-known. Even Fred Brooks’ classic, No Silver Bullet, mentions CPD mentoring as a vital step in growing great developers. It’s a primary difference between the top 100 companies in a field and the field as a whole, when done properly and assessed correctly. But in Ireland, only 10% of companies are involved in CPD to a high level, and only 40% get involved at any level at all. Looking at the IEI’s list of participating organisations in its CPD programme is telling – in the Tech section of the list there are only nine companies (out of a total of 94) and most of those are large multinationals (BT, IBM, Intel and so on). Of our native SME sector, there are, basically, none.

Nor are there many courses for the IT sector in the IEI’s lineup of CPD courses. There are non-technical courses in common with other sectors of course – Project Management, Communications and so forth; but for technical courses there’s only one, on iPhone apps.

Nor is there much in the way of third-level support, at least from my limited vantage point. I certainly never encountered any mention of CPD during my undergrad degree, nor the C.Eng qualification. Some universities like DIT are now running CPD courses with the IEI, so hopefully this is changing.

But the companies are where this all has to start. Why do we never see recruitment ads looking for specific CPD accreditations? Why is there such poor support for the C.Eng qualification? Why do so few small shops go for the IEI CPD Accredited Employer standard? As I said on this thread on the boards.ie Development forum, if a company is not willing to take on CPD in a proper manner, it has no business complaining about the standard of potential hires, because it is part of the problem. And a critical part, at that.

I think myself that in the startup sector of our industry especially, this is a side-effect of the buy-in to the cult of the ‘rock star developer’. Watch TWiST sometime, especially the DHH interview, and ask yourself: when so much focus is put on being one of the top developers around, and when we have no objective way to measure how good someone in that field is but rely instead on how often people are talking about them, would people that arrogant and unprofessional ever take part in a process like CPD, which is based on the idea that you don’t know everything? Would anyone looking for a new role ever mention CPD to them? And how do we expect companies to treat us, when we publicly espouse such unprofessional viewpoints?

Smartphone data traffic eclipses Feature Phones but the iDevices are coming up fast…

Admob released their Mobile Metrics for February 2009-February 2010 a few days ago. The most interesting information there is well summarised in one graph:

Traffic Share by Handset Category, worldwide, from the Admob Mobile Metrics report February 2009 - February 2010

Right there, in October last year, the smartphone finally eclipsed the feature phone. This is something that every data provider in the mobile sector has been screaming about for quite a while now – the upcoming mobile data ‘apocalypse’. A mere three or four smartphones can generate enough data to swamp an exchange from only a few years ago; the only reason networks like AT&T’s haven’t been falling over more often than they have been is a lot of fairly rapid work on the part of the technical teams in charge of the backhaul for their networks. But the rise in the demand for mobile data, as game-changing as it has been, is only just getting started, and this report points that out.

Ignore for a moment the swapping of places between the smartphone and the feature phone, and look instead at the growth rates of demand for smartphones and for the third category of device, the mobile internet device (currently this is predominantly – i.e. 93% – the iPod Touch). This category’s demand for data from the Mobile Network Operators has grown by almost 400%, compared to 193% for smartphones. But surely that has to top out, right? What could possibly maintain that level of growth?

iPad

Yup. When the iPad debuts, it’s going to be in this sector. And it’s a content consumption device almost by default – newspapers, YouTube, you name it. Granted, only the more expensive model has 3G, but you know that’s not going to last – Apple has a pattern with their hardware which tells us that however slick the iPad is today, it’s only going to be refined and become more compelling as a device. And meanwhile the iPad clones like the JooJoo, which will get to customers even before the iPad does, will only add to the growth in data demand that the iPad is going to drive.

And LTE isn’t going to save things. Ericsson’s latest figures indicate that a 1000-fold increase in over-the-air capacity is needed, and LTE will only offer around a 10-fold increase. To make up the rest, plans include introducing LTE in combination with taking over more spectrum, building nearly ten times as many base stations as exist today, and three or four other impossible things before breakfast. And even if the MNOs can pull all that off, you still have to have backhaul to attach to that over-the-air network – and backhaul in the US alone cost $16 billion last year.

The pressure just got turned up a notch on the data teams in MNOs…

Nagios notifications via clickatell

Nagios SMS Alert

The most robust solution for sending SMS notifications of server issues detected by Nagios is well-known – plug a GSM modem (or a mobile phone) into your server directly and use that as the delivery mechanism. It’s true out-of-band communications, so if the backhaul fails completely, you still get notified.

However, for someone who has a small server like mine, in a geographically remote location (like Germany, in my case), that may be overkill. Especially as the cost includes both the initial cost of the hardware and the ongoing service charges. And for true out-of-band notification, you need far more admin time than may be appropriate for a small site; you have to have scheduled tests of the out-of-band notifications lest the hardware fail without detection, and so forth.

In my case, the requirement is far lower-spec; I just want to get a quick ping if the server starts to die so I can log in and try to effect repairs. If it’s an actual hardware failure, or an actual backhaul failure, odds are good that I can’t fix it from my end in a few minutes anyway, and we’re past the “fix it” part of the disaster recovery plan (you do have one, right?), and into the “get another temporary webhost and redirect using DNS” part of the DR plan (you do keep regular backups of your site locally, right?).
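As an aside, for a small WordPress site that part of the plan can be as simple as something along these lines (a rough sketch only – the database name, credentials, paths and hostname here are placeholders, not anything from my actual setup):

[cc lang="bash"]# dump the blog database over ssh and pull a copy of the web root to your local machine
ssh user@your.server.example "mysqldump -u wpuser -pSECRET wordpress | gzip" > blog-$(date +%F).sql.gz
rsync -az user@your.server.example:/var/www/ ./site-backup/[/cc]

Run that from a cron job on your home machine and the “redirect DNS to a temporary host” part of the plan becomes a lot less scary.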

So my solution is equally lower-spec. I got an account with clickatell.com (there are several other web SMS gateways, I chose clickatell.com because I’m most familiar with them, not because of any objective evaluation) and enabled the HTTP/S API on the account and then wrote the following additions to the Nagios /etc/nagios3/commands.cfg file (again, this is for Debian, your file location may vary…):

[cc lang="bash" escaped="true"]define command{
    command_name    host_notify_with_sms
    command_line    wget "http://api.clickatell.com/http/sendmsg?user=INSERT_USERNAME_HERE&password=INSERT_PASSWORD_HERE&api_id=INSERT_API_ID_HERE&to=$CONTACTPAGER$&text='$NOTIFICATIONTYPE$ Server is $HOSTSTATE$ ($HOSTOUTPUT$) @ $LONGDATETIME$'"
}

define command{
    command_name    service_notify_with_sms
    command_line    wget "http://api.clickatell.com/http/sendmsg?user=INSERT_USERNAME_HERE&password=INSERT_PASSWORD_HERE&api_id=INSERT_API_ID_HERE&to=$CONTACTPAGER$&text='$NOTIFICATIONTYPE$ Server : $SERVICEDESC$ is $SERVICESTATE$ @ $LONGDATETIME$'"
}[/cc]

Then tell Nagios to use these new commands in /etc/nagios3/conf.d/contacts_nagios2.conf :

[cc lang="ini"]define contact{

service_notification_commands   service_notify_with_sms notify-service-by-email
host_notification_commands      host_notify_with_sms notify-host-by-email

pager                           INSERT_MOBILE_PHONE_NUMBER_HERE
}
[/cc]
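After editing those files, Nagios needs to pick up the changes – something along these lines works on a stock Debian nagios3 install (a quick sketch; the -v run just validates the configuration before the reload):

[cc lang="bash"]sudo nagios3 -v /etc/nagios3/nagios.cfg && sudo /etc/init.d/nagios3 reload[/cc]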

And now Nagios warnings and alerts are dispatched both by email (in a longer format) and by SMS. All very simply done, needing only wget to be installed. It’s easily testable as well – just call the wget line from the command line, like so:

[cc lang="bash" escaped="true"]wget "http://api.clickatell.com/http/sendmsg?user=INSERT_USERNAME_HERE&password=INSERT_PASSWORD_HERE&api_id=INSERT_API_ID_HERE&to=INSERT_MOBILE_PHONE_NUMBER_HERE&text='This is a test message. If this was a real message, you would be panicking right now and desperately looking for a terminal.'"[/cc]

Given the pricing structure in clickatell, it’s quite affordable – at the rate of normal usage, I should get a year’s coverage for around €50 or so, with capacity to spare. So it’s easy, cheap and testable. It’s not true out-of-band, but as 80% solutions go, it’s not too shabby.

Ben Nanonote with WiFi

Ben Nanonote

One of the reasons I love my Nokia e71 so much is that it’s a pretty decent example of convergence. Like the iPhone and others, it rolls so many features into one box that we’ve stopped calling these things mobile phones and started calling them mobile devices, almost without noticing. Heck, the ‘in-crowd’ just talks about ‘mobile’ as though the OED had recategorised that word from adjective to noun. It’s not so much linguistic arrogance as it is necessity – you have to go to science fiction or back to mythology to find examples of the kind of multifunction tool these devices have become and are still becoming.

The iPhone is without a doubt the poster boy for this, as its marketing is, ironically enough, pretty much founded on using it for things other than as an actual phone (and that’s why the iPad, daftly named as it is, will probably be a great success, but not as great as its more diminutive cousin; the whole attraction of the iPhone’s ability to be more than a phone rests on the fact that you are already carrying it around with you). One quick download and your phone becomes a translation device, a 2-D barcode reader, or any one of a few hundred other devices.

My problem is that I don’t really like the iPhone. It’s very slick and very pretty, but there are no background applications, it carries a hefty price tag, and to use it as intended I pretty much have to have a Mac. Sure, you can fake around that need, but it’s a chore. The Nokia e71 is wonderful in hardware (if you overlook the very poor camera, which is hard pressed to handle the basic business task of recording the contents of a whiteboard after a brainstorming session – unforgivable given that mid-range phones handled this task better five years ago), but it’s awkward to set up with calendars and contacts and apps, even going through Ovi (which is why I’m still using a paper diary).

Once you decide against the iPhone and Nokia (and Blackberry because support for it in Ireland is again, all tied to one supplier and it’s not the best supported device here even though it’s huge in Asia), you’re pretty much left with the outliers right now, meaning Android. Yes, Android is an outlier. It gets great press without a doubt, but if you’re not a technology or gadget geek, it’s just another phone that’s a bit dingy-looking with its off-white case that doesn’t sit flat in a jacket pocket. Most people don’t know it’s a software platform, not a phone — and most of them wouldn’t understand what you meant if you told them (and amongst the real experts, btw, there are a few who don’t think much of it at all). And if you don’t mean Android, you’re right out there into the fringe at the moment. Which means stuff gets very interesting and individual indeed, which is where things like the 本 (běn) NanoNote come in:

Ben Nanonote from Qi Hardware

The Ben Nanonote looks like it might be a very interesting part of the fringe indeed. It’s small, but has a physical keyboard (humans like haptic interfaces for a good reason) and is completely open (both in hardware and software). Granted, it’s no speed demon – the iPhone ARM chips have a bit more oomph than it does – but even so, it could run a reasonably wide array of applications. It’s a long way from perfect, since it has no camera, no inbuilt wifi or inbuilt 3G or inbuilt WiMAX; but it’s intended as a first model and for a first model it’s got some promise.

Not least of which is that it costs around €70 at the moment. Add in the €60 you have to pay to get a supported microSD wifi card, and you’re still looking at less than a third of the cost of most netbooks over here. It’s a hobbyist platform rather than a serious do-work-on-this box at the moment, but looking at the upcoming Ya and Mu Nanonote platforms and seeing how building in wifi and other hardware is so possible, you have to ask the question of how long it’ll be until a commercial interest starts capitalising on the work Qi’s done here, and creates a larger market than just the hobbyist fringe. There’s a principle in open source software that the fastest way to change how something is done is to do it differently and release the code. Personally, I hope that trend holds true in hardware and we see a new market of palmtops acting as miniature netbooks; I would love to get a platform the size of the Nanonote, just with a few more networking options (as in, all of them – WiFi, WiMAX, 3G, LTE, the works). A true mobile device.

And yes, I still want an N900. If nothing else, it’d make a good stopgap measure 😀 In the meantime… well, €130 isn’t too much to drop to play with a toy like this, right?

No Redditting Allowed

No Redditing allowed

So after my little kerfuffle with Reddit, my account was reinstated and there was a (somewhat rowdy) discussion on the topic on Reddit. During that discussion, I learnt that:

  • The actual rules you have to agree to when posting on Reddit are, to say the least, a bit murky. The User Agreement says that Reddiquette is actually a set of rules; the community thinks not. The User Agreement also contradicts Reddiquette (and prohibits the majority of the published comments, a large portion of the subreddits and a hard-to-estimate fraction of all the posts on the site), while simultaneously declaring that Reddiquette carries the same legal weight as the User Agreement itself, so it’s a confused situation.
  • The community is of the opinion that submitting your own content is spamming, but that submitting the content of others is good behaviour. I am not of that opinion. In fact, I am opposed to that viewpoint for two reasons:
    • If the submitter is financially benefitting from the submission, then to my mind it’s spam. That’s not the case with this blog. There are no advertisements, and no products or services for sale here. Never have been. I have no plans to introduce them. I cannot agree that original, real content, submitted without financial gain being sought or realised, is spam. It’s a daft position for Reddit to support.
    • If I write something here, the sole reward I get is knowing others thought something I wrote was worth reading. As such, I want to be the one who gets to post the link to it on sites like Reddit. I don’t want someone else drafting the headlines for my work. The idea that Reddit believes they have a de facto right to do so is abhorrent to me. And the idea that my submitting my own work is frowned on, while others are rewarded for submitting it, is deeply abhorrent.

I’m not saying “Reddit Sucks” or anything so silly. Reddit own their own site, they can do as they wish and they do very well for themselves and they seem quite happy. That’s cool. But my work is mine and I don’t agree with how they use it for the above reasons.

So I’m withdrawing my work from Reddit. If you navigate from there to here, you are redirected to this page. Redditors are still very welcome here. They can use the sidebar links to look through anything and everything I’ve posted, or come to the site directly; and I hope they find it of interest or worth a laugh, or that they find something in it they needed. But I’m no longer allowing links from Reddit to here, at least not until the above two notes no longer hold true.

Not-so-shortlisted!

Irish Blog Awards

I didn’t notice at the time, what with all the reddit fun, and then all the fun that happened on the server as a result of all the reddit fun, but @susan_lanigan kindly pointed out that I’ve been shortlisted in the Best Technology Blog section for the Irish Blog Awards. And, looking at the quality of the competition, I’m going to be insufferable for a little while that I’m considered to be in the same league, even if only after the first round of judging 🙂

Performance tuning a server in less than three minutes while being slashdotted

So you wrote a blog post about something that seemed fairly innocuous, but for whatever reason, it caught the attention of one of the major sites and now your server load is at 110 and climbing, the ssh command line session is taking thirty seconds to respond to anything at all, and given that your post is on the front page of slashdot at primetime, this doesn’t look like it’s a temporary blip. What do you do?

Burning computer

Okay, first things first. You don’t have time to do a proper fine-tuning session. You need a quick & dirty tuneup here. Proper fine tuning you leave till after the traffic spike, and you can then come back at it with a plan and decent tools like siege and so on – but this is the “fix it in three minutes” version. So if you see stuff here that looks crude to you, despair not, everything looks crude when you’re trying to do it in that kind of timeframe.

First, you have to be able to actually use the server command line. The load’s at 110 – that’s not coming down anytime soon and until it does you can’t do the rest of the work you need to since it takes so much time for even a shell to respond. The load is being caused by Apache and MySQL taking on more work than the server can handle, causing the server to swap excessively; and you’ve got to dump that work or shut off the incoming new work to recover. You can try sudo killall -9 apache2 if you can get an open ssh terminal in to do the job (and it’s the first thing you should try), but the odds are that that server has to be reset. Whether that means a phone call to the data centre, or a walk down the hall to the server room, or just clicking a button in a web interface, that’s the first thing to do. Don’t hold off, because unless everyone stops reading your page right now, that load’s not coming down.
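If you can get a terminal in, that first attempt is nothing more than this (pure load-shedding, nothing clever about it):

[cc lang="bash"]# kill every Apache worker outright to stop the swapping
sudo killall -9 apache2[/cc]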

Once the box has rebooted (and I mean immediately – sit there watching ping until it comes back), ssh in and shut down Apache. MySQL is okay for now, but the work is coming from Apache, so that has to stay shut down until everything’s ready to go again. You’re going to lose a few minutes of service, yes, but for a blog that’s recoverable (and if this is something more serious than a blog, you’re going to be in trouble for not properly spec’ing the hardware to begin with anyway, so limit the damage now and take your licks later).

At this point – whether you’ve just logged in or whether you managed to run killall successfully – if it’s a Debian server you should run sudo /etc/init.d/apache2 stop (did I mention I love Debian for having scripts like that? No man-paging apachectl, just an easy-to-remember standard interface for every service on the box. Wonderful). It’ll tidy up after killall, or it’ll shut down Apache cleanly, depending on how you got here.

I’m going to use my server as the example here, by the way, since it’s what got burned last night and it’s what prompted me to write this refresher; it’s been two or three years since I last had to maintain a server that was near its capacity, and the experience was a bit of a flashback 🙂 So, some background on my server – I moved my blog from wordpress.com to here, on a Hetzner server. It’s their entry-level dedicated server (2GB of RAM, a 64-bit Athlon, a 160GB hard drive in a hardware RAID-1 array and a 1Gbit NIC), all running Debian Lenny (and no, I’ve no relationship with Hetzner, they were just the cheapest of the places various friends recommended). WordPress is up to date on my server (2.9.2 at the time of writing), as is the Lenny install – if you don’t have the latest security fixes and such in place, or your WordPress is outdated, then that’s probably adding to your problem, but for a quick fix like this, that’s too big a job. Get through the traffic spike and deal with it later.

And yes, that server spec is overkill for my needs really – but I had a bunch of side projects like RangeClerk (don’t bother, not much is up yet), and the blog for Herself Indoors and her book, and some other things I wanted to run as well that would be using weird PHP and Python modules and libraries and the like; and I just hate cPanel and not being able to install anything I want. Plus, it was cheap 😀

Right, back to it.

The first thing we need to do is to sort out MySQL’s configuration. Open up the my.cnf file, wherever you’ve put it (it’ll be /etc/mysql/my.cnf for a stock Debian install). We need to tweak just a few settings. First off, key_buffer. This is probably the most critical variable here, because by default all the tables will be MyISAM tables (if they’re not, then this isn’t so critical). It’s set to about 16MB by default; we’re going to turn that up quite a bit. On a dedicated database box this would be set very high – anything up to 50% of the total available memory. In this case, with the full stack on the one box, we set it a bit lower since Apache’s going to want a lot of RAM too – 256MB will do for a starting value.

Next we’re going to disable the InnoDB engine completely to cut down on MySQL’s footprint. Again, WordPress by default isn’t using it. Just ensure skip-innodb is either uncommented or inserted into my.cnf.

Lastly, we’re going to enable the query cache. The thing is, MySQL’s query cache is a fairly blunt instrument: if a query is precisely the same the second time it comes in, it’ll hit the cache, but any change at all, no matter how small, and it misses. So it’s not as enormously useful as you’d first imagine. However, it does help, so we’ll give it a modest size (48MB of RAM is sufficient here). So our changes to my.cnf look like so:

[cc lang="ini"]
key_buffer = 256M
query_cache_limit = 16M
query_cache_size = 48M
skip-innodb[/cc]

Once those changes are made, sudo /etc/init.d/mysql restart will get the MySQL server up and running with the new setup. Once that’s done, let’s look to the next level in the stack – Apache. Under Debian the config files are arranged differently than usual; the configuration changes we’ll make will be in /etc/apache2/apache2.conf, but in other installations they would be in httpd.conf or elsewhere.
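Before diving into Apache, it’s worth a ten-second check that MySQL actually picked up the new values (assuming you can connect as root, or as any user with enough privileges):

[cc lang="bash"]mysql -u root -p -e "SHOW VARIABLES LIKE 'key_buffer_size'; SHOW VARIABLES LIKE 'query_cache_size';"[/cc]

The values come back in bytes, so expect to see 268435456 and 50331648 rather than 256M and 48M.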

The default Apache install uses the prefork MPM – one single-threaded process per request, no threading. It’s older, heavier and less efficient than the threaded worker MPM, but it’s the safe choice when you’re running mod_php, since plenty of PHP extensions aren’t thread-safe. So find the prefork MPM config settings in apache2.conf. They should look like this in a default install:

[cc escaped="true" lang="apache"]
<IfModule mpm_prefork_module>
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild   0
</IfModule>[/cc]

We’re going to cut down a lot on how much work Apache takes on at once here. Yes, some users will have to wait a few seconds to see your page – but right now, with the load at 110 and climbing, they could wait until their browser timed out and they’d never see anything. So we reduce slightly the number of server processes Apache starts up to handle requests, from 5 to 4; we increase the number of spare servers it keeps around to hand requests to (we want to reduce the overhead of starting and stopping those processes) from 10 to 12; and we set an upper limit on how many it can have in total, keeping it to just under 100. This works on my system, which is an entry-level system; you might get away with more, but for now use these settings, they’ll get you up and running, and you can increase a bit and check again as you go (this guide really isn’t aimed at big sites anyway, just small ones like mine which were caught on the hop). We’re also going to make sure no Apache process bloats over time by recycling each child after it has served a small number of requests – we’ll keep MaxRequestsPerChild low for now (3), and it can be increased later. So our changed config settings now look like this:

[cc escaped="true" lang="apache"]<IfModule mpm_prefork_module>
StartServers          4
MinSpareServers       4
MaxSpareServers       12
ServerLimit           96
MaxClients            96
MaxRequestsPerChild   3
</IfModule>
[/cc]

Okay. At this stage, you have two options. The first is to start Apache up again and get back to work. Odds are, this will hold up pretty well – but you want to keep a window open with htop running in the background to keep an eye on things (mainly you’re watching the swap space usage and the load; the former’s critical, the latter indicative that a problem’s arising – if either goes sideways, kill Apache and edit apache2.conf, setting even lower values for ServerLimit, MaxClients and MaxRequestsPerChild before restarting Apache). If that’s your preferred option, skip to the end of this post.
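If htop isn’t already on the box, even a crude watch loop does the job for that (a throwaway sketch using the standard procps tools):

[cc lang="bash"]# refresh the load average and swap usage every five seconds
watch -n 5 "uptime; free -m | grep -i swap"[/cc]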

However, if you want to take that extra step, we could install memcached quickly here. It’s a very effective load reducer and, under Debian, it’s far easier than you’d expect:

[cc lang="bash"]sudo aptitude install build-essential php5-dev php-pear memcached[/cc]

And let that haul in whatever other libraries it needs, then:

[cc lang="bash"]pecl install memcache[/cc]

And once that’s done, edit the php.ini file (in Debian, that’ll be /etc/php5/apache2/php.ini ) and insert this (anywhere in the file will do, but the extensions section is the tidiest):

[cc lang="ini"]extension=memcache.so[/cc]

That should be the memcached daemon installed and running in a default configuration, along with the PHP extension to talk to it (we can fine-tune later). We now need to drop in the backend that WordPress uses to take advantage of memcached. Download object-cache.php and copy it into the wp-content directory of your website, then change the permissions and ownership of the file:

[cc lang="bash"]cd [insert your www/wp-content directory here]
sudo wget http://plugins.trac.wordpress.org/export/215933/memcached/trunk/object-cache.php
sudo chown www-data:www-data object-cache.php
sudo chmod 644 object-cache.php[/cc]

And that’s it done. Quick, dirty, and everything at default, but that’s a three-minute setup for you (well, maybe five if you do the memcached setup as well, and I am assuming you have a fast net connection for the aptitude step, but still).

Now, restart Apache and everything should fire up with memcached caching a lot of requests and keeping the server load to a manageable level.

[cc lang="bash"]sudo /etc/init.d/apache2 force-reload
sudo /etc/init.d/apache2 restart[/cc]
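Once Apache is back up, a quick way to confirm WordPress really is talking to memcached (assuming memcached is listening on its default port, 11211, and netcat is installed) is to watch the hit counters climb as pages are served:

[cc lang="bash"]echo stats | nc -q 1 localhost 11211 | grep -E "get_hits|get_misses"[/cc]

If get_hits stays at zero while traffic is flowing, object-cache.php isn’t being picked up and it’s worth re-checking the file’s location and permissions.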

And once that traffic spike is past… take the time to tune it properly!

Silently banned from Reddit…

Reddit Alien - Screw You Buddy

For almost two years now I’ve been reading Reddit and posting material there, mostly in Programming but in other subreddits too. I’ve built up positive karma points there, I’ve never broken the site rules, and I’ve been a fairly regular reader there. A few days ago I posted a review on the new headphones I bought to the audio and headphones subreddits, and there wasn’t a single blip on them – no comments, no votes, nothing. That’s rather rare for reddit, usually there’s something. And then I noticed in the statistics on the blog that I was seeing no referrals from there at all either, which is very odd. I looked at the site and sure enough, there are the posts, so what’s happening?

It’s at this point that I find that no one else can see the posts at all. I load my user page in a test browser which isn’t logged into my account, and find that my userpage on reddit is giving a 404 error.

Reddit User Page 404 error

So my first reaction is “what the…?”, and I send a message to the admins of the site asking what’s broken. Twice, in fact.

I’m still waiting to hear back, five days later.

In the meantime, however, I find that this is apparently standard practice for Reddit — if you’re judged to be at fault, you’re just silently dropped. They call it the zero point ban and as ways of dealing with your userbase go, it’s probably one of the more cowardly I’ve encountered. Here’s how it goes:

  • Firstly, you don’t document all the rules. Sure, there are basic site rules and FAQs and “Reddiquette”, but you don’t put everything in there.
  • Then, you let anyone at all report anyone else, without having to make a case beyond the initial accusation.
  • Then, you don’t let anyone know that they’ve been reported for something.
  • You make sure that users’ past records aren’t taken into account – so that even when your karma is good, it doesn’t matter.
  • Then you don’t let anyone know they’re being judged.
  • Then you don’t let anyone know if they’re found wanting.
  • Then you don’t tell them they’re being banned.

So, I have to go digging back through the spam reporting sub-reddit (which I didn’t even know existed this time a week ago) to find I had been accused at all, and since your username isn’t actually instantly visible (it’s in the link, so you have to run your mouse over it while watching the status bar), that’s not a trivial task when someone reports someone else every few minutes:

reddit - Reported For Spamming

Once I found that, I thought “Well, okay, at least I can argue my case here so”. Except, no: any comment you make there doesn’t come up, because you’re banned by this point. Maybe if you’d been there earlier (as in 19 hours ago, when I first found this) you could have argued your case – but because you didn’t get told you were up for review (you weren’t even downvoted by the person who reported you), you didn’t know you were being judged and now you’re right out of luck. So you can comment, and the system even notes that you have commented, but no one else can see your comment.

I might not even be all that annoyed, to be honest, but when you find that your accuser has been less active on the site than you, that he’s built up less karma than you have, that he’s never contributed anything original to the site and has only posted 26 links when you’ve posted over 260 (including some which were in the reddit.com top ten and which have tens to hundreds of thousands of hits) — well, that rankles. It’s just plain wrong.

Reddit karma points
Reddit accuser karma points

Granted, you couldn’t arrange things so that an accuser must have more karma points than the person they’re accusing – that wouldn’t be fair either – but surely it’s only right to let people know they’ve been accused of something?

As ways of interacting with your users go, this is not just sub-par, it’s downright sneaky. And the cynic in me thinks that the fact that this requires a minimum amount of work from the admins, and that it still generates site traffic and ad clicks, are motives either for not fixing this or for designing it this way from the get-go – and that’s sneakier still.

So now what?

Well, in the greatest traditions of websites everywhere, IYDLIGTFO. Between Digg, StumbleUpon, Mixx, Propeller, Diigo, HackerNews, DZone, Buzz and others, it’s not like there isn’t competition or alternatives aplenty. And if this is how Reddit treats its users, it’s not someplace to hang out anymore.

Censored Reddit Alien

Suura over coffee

Filmed before Mobile World Congress:

Suura.com have developed patent-pending technology to silently authenticate 3G subscribers onto existing WiFi networks, without modification to those networks, creating the opportunity for 3G data subscribers to roam, without intervention, onto more cost efficient WiFi networks. Suura also meters the use, so the WiFi provider can be compensated for the offload.

Suura brings the promise of low-cost WiFi access through public hotspots. The system is targeted towards users who wish to surf the Web or make VoIP calls while they are roaming. Suura has the ability to seamlessly authenticate mobile devices through WiFi hotspots using a single-click mechanism, and works on all the major mobile platforms.

Contact john.whelan@tcd.ie for more information or call +353-1-896-3269
