02 Nov 17

fork()ing and fstat()ing in JRuby using FFI on linux

Sometimes, $DAYJOB can get kind of technical. For reasons I won’t go into here because NDA, the following axioms are true for this puzzle:

  • we have to work in JRuby
  • we are in a plugin within a larger framework providing a service
  • we have to restart the entire service
  • we don’t have a programmatic way to do so
  • we don’t want to rely on external artifacts and cron

Now, this isn’t the initial framing set of axioms, you understand; this is what we’re facing after a few weeks of trying everything else first.

So: obvious solution, system('/etc/init.d/ourService restart').
Except that JRuby doesn’t do system(). Or fork(), exec(), daemon(), or indeed any kind of process duplication I could find. Oh-kay, so we can write to a file, have a cronjob watch for the file and restart the service and delete the file if it finds it. Except that for Reasons (again, NDA), that’s not possible because we can’t rely on having access to cron on all platforms.
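For the record, the watcher we couldn’t deploy would only have been a few lines of shell – a sketch, with the flag path and service name made up for illustration:

#!/bin/bash
# Hypothetical cron watcher: if the flag file has appeared, clear it
# and restart the service. Run every minute or so from cron.
FLAG=/var/run/ourService.restart-requested

if [ -e "$FLAG" ]; then
   rm -f "$FLAG"
   /etc/init.d/ourService restart
fi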

Okay. Can we cheat?

Well, yes… allegedly. We can use the Foreign Function Interface to bind to libc and access the functions behind JRuby’s back.

require 'ffi'

module Exec
   extend FFI::Library

   ffi_lib FFI::Library::LIBC

   attach_function :my_exec, :execl, [:string, :string, :varargs], :int
   attach_function :fork, [], :int
end

vim1 = '/usr/bin/vim'
vim2 = 'vim'

# In the child (where fork returns 0), swap our process image for vim;
# the :pointer, nil pair is FFI's way of passing the NULL that terminates
# execl's argument list
if Exec.fork == 0
   Exec.my_exec vim1, vim2, :pointer, nil
end

Process.waitall

Of course, I’m intending to kill the thing that fires this off, so a little more care is needed. For a start, it’s not vim I’m playing with. So…

module LibC
   extend FFI::Library

   ffi_lib FFI::Library::LIBC

   # Timespec struct datatype
   class Timespec < FFI::Struct
      layout :tv_sec, :time_t,
             :tv_nsec, :long
   end

   # stat struct datatype
   # (see /usr/include/sys/stat.h and /usr/include/bits/stat.h)
   class Stat < FFI::Struct
      layout :st_dev, :dev_t,
             :st_ino, :ino_t,
             :st_nlink, :nlink_t,
             :st_mode, :mode_t,
             :st_uid, :uid_t,
             :st_gid, :gid_t,
             :__pad0, :int,
             :st_rdev, :dev_t,
             :st_size, :off_t,
             :st_blksize, :long,
             :st_blocks, :long,
             :st_atimespec, LibC::Timespec,
             :st_mtimespec, LibC::Timespec,
             :st_ctimespec, LibC::Timespec,
             :__unused0, :long,
             :__unused1, :long,
             :__unused2, :long,
             :__unused3, :long,
             :__unused4, :long
   end

   # Filetype mask
   S_IFMT = 0o170000

   # File types.
   S_IFIFO = 0o010000
   S_IFCHR = 0o020000
   S_IFDIR = 0o040000
   S_IFBLK = 0o060000
   S_IFREG = 0o100000
   S_IFLNK = 0o120000
   S_IFSOCK = 0o140000

   attach_function :getpid, [], :pid_t
   attach_function :setsid, [], :pid_t
   attach_function :fork, [], :int
   attach_function :execl, [:string, :string, :string, :varargs], :int
   attach_function :chdir, [:string], :int
   attach_function :close, [:int], :int
   attach_function :fstat, :__fxstat, [:int, :int, :pointer], :int
end

So that’s bound a bunch of libc functions for use in JRuby. But why __fxstat() instead of fstat()? Interesting detail: the stat() family of functions isn’t in the shared libc, at least not on most modern linux platforms. They live in a small static library (libc_nonshared.a on CentOS). There’s usually a linker directive that makes that transparent, but here we’re acting behind the linker’s back, so we don’t get that nicety; instead we directly access the underlying __xstat() functions.
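You can see this for yourself with nm, by the way. The paths below are for a 64-bit CentOS-style layout and vary by distro – and very recent glibc releases export fstat() directly again – so treat this as illustrative:

# The shared libc exports the __xstat family of symbols...
nm -D /lib64/libc.so.6 | grep xstat

# ...while the stat() wrappers themselves sit in the small static archive
ar t /usr/lib64/libc_nonshared.a | grep stat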

I need to close some network ports (or the restart goes badly because the child process inherits the ports’ file descriptors and someone didn’t set them to close on exec()). A small helper function is useful here:

# Helper function to check if a file descriptor is a socket or not
def socket?(fd)
   # data structure to hold the stat data
   stat = LibC::Stat.new

   # JRuby's IO object types can't seem to get a grip on fd's inherited from
   # another process correctly in a forked child process, so we have to FFI
   # out to libc. The first argument is the stat struct version expected by
   # __fxstat (0, the kernel version, matches the layout above on x86-64).
   rc = LibC.fstat(0, fd, stat.pointer)
   if rc == -1
      # errno is available from FFI::LastError.error if you want to log it
      false
   else
      # Now we do some bit twiddling. In Octal, no less.
      filetype = stat[:st_mode] & LibC::S_IFMT
      filetype == LibC::S_IFSOCK
   end
rescue StandardError
   false
end
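As a sanity check while debugging, the same information is visible from a shell: every entry in /proc/<pid>/fd is a symlink, and sockets resolve to a socket:[inode] target.

# List a shell's file descriptors; sockets show up as socket:[inode]
for fd in /proc/$$/fd/*; do
   echo "$fd -> $(readlink "$fd")"
done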

And now the actual restart function itself:

def restart
   pid = LibC.getpid
   rc = LibC.chdir('/')
   if rc == -1
      errno = FFI::LastError.error
      return errno
   end

   # close any open network sockets so the restart doesn't hang
   fds = Dir.entries("/proc/#{pid}/fd")
   fds.each do |fd|
      # skip . and .. which we pick up because of the /proc approach to
      # getting the list of file descriptors (this also skips fd 0, stdin,
      # which is no loss - it's not a socket)
      next if fd.to_i.zero?

      # skip any non-network socket file descriptors as they're not going to
      # cause us any issues and leaving them lets us log a little longer.
      next unless socket?(fd.to_i)

      # JRuby's IO objects can't get a handle on these fd's for some reason,
      # possibly because we're in a child process. So we use libc's close()
      rc = LibC.close(fd.to_i)
      next if rc.zero?
      errno = FFI::LastError.error
      return errno
   end

   # We're now ready to fork and restart the service
   rc = LibC.fork
   if rc == -1
      # If fork() failed we're probably in a world of hurt
      errno = FFI::LastError.error
      return errno
   elsif rc.zero?
      # We are now the daemon. We can't hang about (thanks to 
      # JRuby's un-thread-safe nature) so we immediately swap out our 
      # process image with that of the service restart script. 
      # This marks the end of execution of this thread and there is no return.
      LibC.execl '/etc/init.d/ourService', 'ourService', 'restart', :pointer, nil
   end
rescue StandardError
# Handle errors here (removed for clarity)
end

An interesting problem to solve, this one. And by “interesting” I mean “similar to learning how to pull teeth while only able to access the mouth via the nose”. But in case it’s of use to someone…


23 Nov 12

Using VimWiki as a distributed, encrypted lab notebook for programming

So what’s the most important coding tool for a software engineer in the long term? There’s a pretty decent argument that it’s a lab notebook. I’ve kept one for years and it’s one of the more useful tools I have. But apart from not quite measuring up to how a lab notebook should be kept, mine wasn’t quite as useful as it could be: when you have five or six notebooks covering a bit over a decade’s worth of notes and no search function, how do you actually use them daily?

I’ve been looking around for a while for a software version of a paper notebook, without much success (this isn’t ludditism – we just don’t have very much that can match what paper and pen can do for this kind of task). I’ve used a simple text editor on a cron job for a few years as a way to track what I do during the day: every 30 minutes, up pops a vim window with a ready-inserted line listing the date and time, I scribble in what I’ve done since the last time the window popped up, and I save and quit. It takes about 15 seconds, and it gives me a file listing everything I’ve done at the end of the day/week/month/year, which is handy for things like preparing reports, doing job reviews, that sort of thing. It’s simplicity itself to set up – just call a simple script from a simpler cronjob entry:

[cc escaped="true" lang="bash"]

#!/bin/bash

# Cron script for 30-min activity journal
#----------------------------------------

export DISPLAY=:0
echo -n -e "\n[" `date` "] :\n" >> ~/.journal
/usr/bin/gvim -U ~/.journal.gvimrc -geometry 100x40+512+400 + ~/.journal

[/cc]

[cc escaped="true" lang="bash"]

# m h dom mon dow command
0,30 8-20 * * mon-fri ~/.journal.sh

[/cc]

Simple and effective… but not a lab notebook. It did help me pin down a few criteria for what a notebook program has to do to fit in with my workflow though:

  • It has to be *fast* and familiar to use. ’nuff said, really. 
  • It has to be distributed. I work on lots of machines; I need it to be available on them, or else I’d have to have it on something like a netbook that I’d carry everywhere and that’s just not practical for me (besides, what if I forgot it or it got stolen?)
  • It has to be encrypted. Everything I work on is NDA’d at one level or another; and design/debugging notes would definitely be too sensitive to leave lying around in plaintext format. With a physical notebook, I keep it locked away; but this is supposed to be better than that option…

Colleagues have used various products for this over the years – mindmapping software, emacs in one mode or another, and various other tools – but none of those really appealed. MediaWiki did seem to be as good a fit as I could find, but something that depends on an entire LAMP stack to run is hardly lightweight; and while I could host it somewhere public, that’s not really very secure (I’d spend more time keeping the full LAMP stack and MediaWiki up to date than I want to). Besides, I’d rather this be console-accessible if possible (yes, some of us are still happier that way 🙂 ).

I’ve been using vim since around 1993 or so; at this point it’s wired into my fingers. So when I saw vimwiki, it seemed ideal. For those who’ve not encountered vimwiki before, it creates a directory, and then every file in that directory becomes part of a rudimentary text-based wiki (which it can turn into a set of HTML pages so it can handle images and so forth, but you can also navigate it from within vim). It also has a diary function which works in a sub-directory of the wiki directory.
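If you’ve not seen it before, the on-disk layout is about as plain as it gets – roughly this by default, with filenames depending on your settings:

[cc escaped="true" lang="bash"]
# A vimwiki is just a directory of plain-text files - by default, roughly:
#
#   ~/vimwiki/index.wiki             the wiki's front page
#   ~/vimwiki/SomePage.wiki          one file per wiki page
#   ~/vimwiki/diary/diary.wiki       the diary index
#   ~/vimwiki/diary/2012-11-23.wiki  one diary page per day
[/cc]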

It doesn’t have any support for encryption or distribution. But that’s quite solvable.

The encryption is easy enough. You could use the blowfish encryption in (post-v7.3) vim, but that proved a bit awkward, as you had to re-enter the password every time you navigated down a link (and I don’t always have post-7.3 vim available); entering a password every minute or so broke up my workflow, so no thanks. My netbook and work laptops all have whole-disk encryption, so I just left the vimwiki directory as normal on those; on the machines where I don’t have whole-disk encryption, I use eCryptFS to create an encrypted directory and put the wiki under that. Very simple indeed, but quite effective. Now even theft of the physical hard drive isn’t a major concern.
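If you’ve not used eCryptFS before, the Debian/Ubuntu tooling makes that setup nearly painless – a sketch, assuming the ecryptfs-utils package and the standard ~/Private directory it creates:

[cc escaped="true" lang="bash"]
sudo aptitude install ecryptfs-utils

# One-off setup: creates ~/.Private (the encrypted store) and ~/Private
# (the cleartext mountpoint), prompting for a passphrase as it goes
ecryptfs-setup-private

# Park the wiki inside it and leave a symlink where vimwiki expects it
mv ~/vimwiki ~/Private/vimwiki
ln -s ~/Private/vimwiki ~/vimwiki
[/cc]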

The distribution was equally simple; you could use any DVCS, but I’m fond of mercurial, so I decided to use that. You have to tweak the vimwiki script ( .vim/ftplugin/vimwiki.vim ) to call it:

[cc escaped="true" lang="vim"]

augroup vimwiki
au! BufRead /home/mdennehy/vimwiki/index.wiki !hg pull;hg update
au! BufWritePost /home/mdennehy/vimwiki/* !hg add <afile>;hg commit -m " ";hg push
augroup END

[/cc]

But that’s a simple tweak at best. And you’ll want ssh set up with keys for the easiest workflow – but you have that already, right? 😀
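If you don’t, it’s only a few minutes’ work – roughly this, with wikihost standing in for whatever your central server is called:

[cc escaped="true" lang="bash"]
# Passwordless pushes: create a key and install it on the central box
ssh-keygen -t rsa
ssh-copy-id mdennehy@wikihost

# One-off: create the central repository and seed it from this machine
ssh mdennehy@wikihost 'hg init vimwiki'
cd ~/vimwiki
hg init
hg add
hg commit -m 'initial import'
hg push ssh://mdennehy@wikihost/vimwiki
[/cc]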

Then just modify the crontab script:

[cc escaped="true" lang="bash"]

#!/bin/bash
# Cron script for 30-min activity journal
#----------------------------------------

export DISPLAY=:0
/usr/bin/gvim -U ~/.journal.gvimrc -geometry 100x40+512+400 -c "call vimwiki#diary#make_note(v:count1)" + -c "r !date +'\%n= \%H\%Mh =\%n'"

[/cc]

And add an Awesome keybinding and menu entry:

[cc escaped="true" lang="lua"]

vimwiki_cmd = "/usr/bin/gvim -U /home/mdennehy/vimwiki/.gvimrc -c 'call vimwiki#base#goto_index(v:count1)'"

mymainmenu = awful.menu({ items = { { "awesome", myawesomemenu, beautiful.awesome_icon },
                                    { "VimWiki", vimwiki_cmd }
                                  }
                        })

awful.key({ modkey, }, "w", function () awful.util.spawn(vimwiki_cmd) end),

[/cc]

And now whenever I hit <Mod4>-w from within Awesome, it pops up a gVim window open at the root of the wiki; every 30 minutes it pops up a gVim window in today’s diary page with the time inserted automatically for a log entry; and whenever I hit save or switch buffers, it syncs the files up to a central server’s encrypted area.

Distributed, encrypted, fast and useful. I’ve been using it in the job for the last few months now and it does almost everything I need. I do still keep around the paper notebook though – no matter how good the program, we still don’t have anything that can do everything paper can do (doodle, take cornell format notes, sketch diagrams easily for later capture, that sort of thing), but vimwiki’s search function alone is making it the day-to-day workhorse and it’s making my life a lot easier. Notes on development, patent ideas, job review reports, sysadmin notes, notes on papers I’m writing, and a daily log, all in one easy-to-use package. Damn useful tool.


29 Apr 10

Python on the Nokia N900



One of the really attractive things about the N900 is the possibility of customising it. You see, I’m a bit of a geek at times. A nerd, if you will. And the idea of being able to tweak the way my phone works appeals to me, whether it be fixing a bug I find in an app or out-and-out writing a new one for something the manufacturer hadn’t thought of (and I think a lot of the iPhone’s appeal to geeks comes from that too, app store policy issues aside). But that’s not an option on Symbian. First off, programming for Symbian is fairly horrible; and secondly, you’d need the source code to the app, which most companies aren’t going to provide without a rather large chunk of change.

But the N900 runs linux and therefore it should be really easy to code for (if you’ve written code on linux before). And since Python is one of the easier ways to write an application, I wondered how long it would take me to get up and running with Python on the N900, from nothing installed through to getting “Hello, World” running. So here’s how it went.


17 Mar 10

Ben Nanonote with WiFi


One of the reasons I love my Nokia e71 so much is that it’s a pretty decent example of convergence. Like the iPhone and others, it rolls so many features into one box that we’ve stopped calling these things mobile phones and started calling them mobile devices, almost without noticing. Heck, the ‘in-crowd’ just talks about ‘mobile’ as though the OED had recategorised that word from adjective to noun. It’s not so much linguistic arrogance as it is necessity – you have to go to science fiction or back to mythology to find examples of the kind of multifunction tool these devices have become and are still becoming.

The iPhone is without a doubt the poster boy for this, as its marketing is, ironically enough, pretty much founded on using it for things other than as an actual phone (and that’s why the iPad, daftly named as it is, will probably be a great success, but not as great as its more diminutive cousin – the whole attraction of the iPhone’s ability to be more than a phone rests on the fact that you’re already carrying it around with you). One quick download and your phone becomes a translation device, a 2-D barcode reader, or any one of a few hundred other devices.

My problem is that I don’t really like the iPhone. It’s very slick and very pretty but… no background applications, a hefty price tag, and to use it as intended I pretty much have to have a mac. Sure, you can fake around that need, but it’s a chore. The Nokia e71 is wonderful in hardware (if you overlook the very poor camera, which is hard pressed to handle the basic business task of recording the contents of a whiteboard after a brainstorming session — unforgivable given that mid-range phones handled this task better five years ago) but it’s awkward to set up with calendars and contacts and apps, even going through Ovi (which is why I’m still using a paper diary).

Once you decide against the iPhone and Nokia (and Blackberry because support for it in Ireland is again, all tied to one supplier and it’s not the best supported device here even though it’s huge in Asia), you’re pretty much left with the outliers right now, meaning Android. Yes, Android is an outlier. It gets great press without a doubt, but if you’re not a technology or gadget geek, it’s just another phone that’s a bit dingy-looking with its off-white case that doesn’t sit flat in a jacket pocket. Most people don’t know it’s a software platform, not a phone — and most of them wouldn’t understand what you meant if you told them (and amongst the real experts, btw, there are a few who don’t think much of it at all). And if you don’t mean Android, you’re right out there into the fringe at the moment. Which means stuff gets very interesting and individual indeed, which is where things like the 本 (běn) NanoNote come in:

Ben Nanonote from Qi Hardware

The Ben Nanonote looks like it might be a very interesting part of the fringe indeed. It’s small, but has a physical keyboard (humans like haptic interfaces for a good reason) and is completely open (both in hardware and software). Granted, it’s no speed demon – the iPhone ARM chips have a bit more oomph than it does – but even so, it could run a reasonably wide array of applications. It’s a long way from perfect, since it has no camera, no inbuilt wifi or inbuilt 3G or inbuilt WiMAX; but it’s intended as a first model and for a first model it’s got some promise.

Not least of which is that it costs around €70 at the moment. Add in the €60 you have to pay for a supported microSD wifi card, and you’re still looking at less than a third of the cost of most netbooks over here. It’s a hobbyist platform rather than a serious do-work-on-this box at the moment, but looking at the upcoming Ya and Mu Nanonote platforms and seeing how feasible it is to build in wifi and other hardware, you have to ask how long it’ll be before a commercial interest starts capitalising on the work Qi’s done here and creates a larger market than just the hobbyist fringe. There’s a principle in open source software that the fastest way to change how something is done is to do it differently and release the code. Personally, I hope that trend holds true in hardware and we see a new market of palmtops acting as miniature netbooks; I would love to get a platform the size of the Nanonote, just with a few more networking options (as in, all of them – WiFi, WiMAX, 3G, LTE, the works). A true mobile device.

And yes, I still want an N900. If nothing else, it’d make a good stopgap measure 😀 In the meantime… well, €130 isn’t too much to drop to play with a toy like this, right?


10 Mar 10

Performance tuning a server in less than three minutes while being slashdotted

So you wrote a blog post about something that seemed fairly innocuous, but for whatever reason, it caught the attention of one of the major sites and now your server load is at 110 and climbing, the ssh command line session is taking thirty seconds to respond to anything at all, and given that your post is on the front page of slashdot at primetime, this doesn’t look like it’s a temporary blip. What do you do?


Okay, first things first. You don’t have time to do a proper fine-tuning session. You need a quick & dirty tuneup here. Proper fine tuning you leave till after the traffic spike, and you can then come back at it with a plan and decent tools like siege and so on – but this is the “fix it in three minutes” version. So if you see stuff here that looks crude to you, despair not, everything looks crude when you’re trying to do it in that kind of timeframe.

First, you have to be able to actually use the server command line. The load’s at 110 – that’s not coming down anytime soon and until it does you can’t do the rest of the work you need to since it takes so much time for even a shell to respond. The load is being caused by Apache and MySQL taking on more work than the server can handle, causing the server to swap excessively; and you’ve got to dump that work or shut off the incoming new work to recover. You can try sudo killall -9 apache2 if you can get an open ssh terminal in to do the job (and it’s the first thing you should try), but the odds are that that server has to be reset. Whether that means a phone call to the data centre, or a walk down the hall to the server room, or just clicking a button in a web interface, that’s the first thing to do. Don’t hold off, because unless everyone stops reading your page right now, that load’s not coming down.

Once the box has rebooted (and I mean immediately – sit there watching ping until it comes back), ssh in and shut down apache. MySQL is okay for now, but the work is coming from Apache, so that has to stay shut down until everything’s ready to go again. You’re going to lose a few minutes of service, yes, but for a blog that’s recoverable (and if this is something more serious than a blog, you’re going to be in trouble for not properly spec’ing the hardware to begin with anyway, so limit the damage now and take your licks later).

At this point — whether you’ve just logged in or whether you managed to run killall successfully —  if it’s a debian server you should run sudo /etc/init.d/apache2 stop (did I mention I love debian for having scripts like that? No man-paging apachectl, just an easy to remember standard interface for every service on the box. Wonderful). It’ll tidy up from killall, or it’ll shut down apache cleanly, depending on how you got here.

I’m going to use my server as the example here, by the way, since it’s what got burned last night and it’s what prompted me to write this refresher – it’s been two or three years since I last had to maintain a server that was near its capacity, and the experience was a bit of a flashback 🙂 So, some background: I moved my blog from wordpress.com to here, on a Hetzner server. It’s their entry-level dedicated server (2GB of RAM, a 64-bit Athlon, 160GB of hard drive in a hardware RAID-1 array and a 1Gbit NIC), all running Debian Lenny (and no, I’ve no relationship with Hetzner, they were just the cheapest of the places various friends recommended). WordPress is up to date on my server (2.9.2 at the time of writing), as is the Lenny install – if you don’t have the latest security fixes in place, or your WordPress is outdated, then that’s probably adding to your problem, but it’s too big a job for a quick fix like this. Get through the traffic spike and deal with it later.

And yes, that server spec is overkill for my needs really – but I had a bunch of side projects like RangeClerk (don’t bother, not much is up yet) and the blog for Herself Indoors and her book and some other things I wanted to run as well, things that would be using weird php and python modules and libraries and the like; and I just hate cpanel and not being able to install anything I wanted. Plus, it was cheap 😀

Right, back to it.

The first thing we need to do is sort out MySQL’s configuration. Open up the my.cnf file, wherever you’ve put it (it’ll be /etc/mysql/my.cnf for a stock Debian install). We need to tweak just a few settings. First off, key_buffer. This is probably the most critical variable here, because by default all the tables will be MyISAM tables (if they’re not, then this isn’t so critical). It’s set to about 16MB by default; we’re going to turn that up quite a bit. On a dedicated database box this would be set very high – anything up to 50% of the total available memory. In this case, with a full stack on the one box, we set it a bit lower since Apache’s going to want a lot of RAM too – 256MB will do for a starting value.

Next we’re going to disable the InnoDB engine completely to cut down on MySQL’s footprint. Again, WordPress by default isn’t using it. Just ensure skip-innodb is either uncommented or inserted into my.cnf.

Lastly, we’re going to enable the query cache. The thing is, MySQL’s query cache is a fairly blunt instrument: if a query is precisely the same the second time it comes in, it’ll hit the cache, but any change at all, no matter how small, and it misses. So it’s not as enormously useful as you’d first imagine. It does help though, so we’ll give it a modest size (48MB of RAM is sufficient here). So our changes to my.cnf look like so:

[cc lang="ini"]
key_buffer = 256M
query_cache_limit = 16M
query_cache_size = 48M
skip-innodb
[/cc]

Once those changes are made, sudo /etc/init.d/mysql restart will get the MySQL server up and running with the new setup.  Once that’s done, let’s look to the next level in the stack – Apache. Under debian the config files are arranged differently than normal; the configuration changes we’ll make will be in /etc/apache2/apache2.conf but in other installations they would be in httpd.conf or elsewhere.

The default Apache install uses the prefork MPM setup – one single-threaded process per connection. It’s older, slower and less efficient than the threaded worker MPM, but it’s the safe choice here because PHP and its libraries can’t be relied on to be thread-safe. So find the prefork MPM config settings in apache2.conf. They should look like this in a default install:

[cc escaped="true" lang="apache"]
<IfModule mpm_prefork_module>
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild   0
</IfModule>
[/cc]

We’re going to cut down a lot on how much work Apache takes on at once here. Yes, some users will have to wait a few seconds to see your page – but right now, with the load at 110 and climbing, they could wait until their browser timed out and never see anything. So we reduce slightly the number of server processes Apache starts off with, from 5 to 4; we increase the number of spare servers it keeps around to hand requests to (we want to reduce the overhead of starting and stopping those processes) from 10 to 12; and we set an upper limit on how many we can have in total, just under 100. This works on my system, which is an entry-level system; you might get away with more, but for now use these settings to get up and running, and increase a bit at a time as you go (this guide really isn’t aimed at big sites anyway, just small ones like mine which were caught on the hop). We’re also going to make sure no Apache process grows too bloated over time by limiting how many requests each child serves before it’s recycled – we’ll keep it very low for now (3), but it can be increased later. So our changed config settings now look like this:

[cc escaped="true" lang="apache"]
<IfModule mpm_prefork_module>
StartServers          4
MinSpareServers       4
MaxSpareServers       12
ServerLimit           96
MaxClients            96
MaxRequestsPerChild   3
</IfModule>
[/cc]
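Before restarting, it’s worth letting Apache syntax-check the edits – a typo in apache2.conf right now would cost you far more than the three minutes:

[cc lang="bash"]sudo apache2ctl configtest[/cc]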

Okay. At this stage, you have two options. The first is to start Apache up again and get back to work. Odds are this will hold up pretty well – but keep a window open with htop running in the background to keep an eye on things. Mainly you’re watching the swap space usage and the load; the former’s critical, the latter indicative that a problem’s arising. If either goes sideways, kill apache and edit apache2.conf, setting even lower values for ServerLimit, MaxClients and MaxRequestsPerChild, before restarting apache. If that’s your preferred option, skip to the end of this post.
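(If htop isn’t on the box – and installing things mid-crisis is its own small risk – plain old watch does the same job from a spare terminal: load on the top line, memory and swap below it.)

[cc lang="bash"]watch -n 5 'uptime; free -m'[/cc]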

However, if you want to take that extra step, we could install memcached quickly here. It’s a very effective load reducer and under debian, it’s far easier than you’d expect:

[cc lang="bash"]sudo aptitude install build-essential php5-dev php-pear memcached[/cc]

And let that haul in whatever other libraries it needs, then:

[cc lang="bash"]pecl install memcache[/cc]

And once that’s done, edit the php.ini file (in Debian, that’ll be /etc/php5/apache2/php.ini ) and insert this (anywhere in the file will do, but the extensions section is the tidiest). Note that it’s the memcache extension – no trailing d – that the WordPress backend below talks to, even though the daemon itself is memcached:

[cc lang="ini"]extension=memcache.so[/cc]

That should be memcached installed and running in a default configuration (we can fine-tune later). We now need to drop in the backend that WordPress uses to take advantage of memcached. Download object-cache.php, copy it into the wp-content directory of your website, and fix the file’s ownership and permissions:

[cc lang="bash"]cd [insert your www/wp-content directory here]
sudo wget http://plugins.trac.wordpress.org/export/215933/memcached/trunk/object-cache.php
sudo chown www-data.www-data object-cache.php
sudo chmod 644 object-cache.php[/cc]

And that’s it done. Quick, dirty, and everything at default, but that’s a three-minute setup for you (well, maybe five if you do the memcached setup as well, and I am assuming you have a fast net connection for the aptitude step, but still).
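One quick check before bringing Apache back, assuming netcat’s installed: memcached’s stats interface is plain text on the default port, and get_hits should start climbing once WordPress is serving pages through the object cache.

[cc lang="bash"]echo -e 'stats\nquit' | nc localhost 11211 | grep get_[/cc]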

Now, restart apache and everything should fire up with memcached caching a lot of requests and keeping the server load at a manageable level.

[cc lang="bash"]sudo /etc/init.d/apache2 force-reload
sudo /etc/init.d/apache2 restart[/cc]

And once that traffic spike is past… take the time to tune it properly!

