Tried to upgrade node from 0.4.5 to 0.6.x and my micro kept falling over dead. I know it's an edge case, but it's an annoying set of symptoms that I figured I should post in case someone else runs into the same issue.
tl;dr => It's not a node problem, it's an AWS kernel issue with old AWS AMIs and Micro instances.
So I have a micro that's about a year old, i.e. a beta-era AWS AMI, but I gather the same problem happens with pretty much every AMI prior to 2011.09. I was running node 0.4.5, but had started using 0.6.4 on my dev box and some modules were now dependent on it. Since micro instances go into throttle mode when building anything substantial, I hoped to use the build from my dev server. The dev machine is CentOS, so I crossed my fingers, copied the build over and ran `make install`. No problem. Then I tried

```shell
npm install -g supervisor
```

and it locked up. Load shot up, the process wouldn't let itself be killed, and I got a syslogd barf all over my console:
```
Message from syslogd@ at Wed Dec 28 00:58:19 2011 ...
ip-**-**-**-** klogd: [  440.293407] ------------[ cut here ]------------
ip-**-**-**-** klogd: [  440.293418] invalid opcode: 0000 [#1] SMP
ip-**-**-**-** klogd: [  440.293424] last sysfs file: /sys/kernel/uevent_seqnum
ip-**-**-**-** klogd: [  440.293501] Process node (pid: 1352, ti=e599c000 task=e60371a0 task.ti=e599c000)
ip-**-**-**-** klogd: [  440.293508] Stack:
ip-**-**-**-** klogd: [  440.293545] Call Trace:
ip-**-**-**-** klogd: [  440.293589] Code: ff ff 8b 45 f0 89 ....
ip-**-**-**-** klogd: [  440.293644] EIP: [ ] exit_mmap+0xd5/0xe1 SS:ESP 0069:e599cf08
```
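For anyone trying to diagnose the same thing, a quick way to see which kernel image (AKI) an instance is pinned to is to ask EC2 for the instance's kernel attribute. This is just a sketch using the classic EC2 API tools; the instance ID is a placeholder, and the `echo` prefix makes it a dry run.

```shell
#!/bin/sh
# Sketch: print which kernel image (AKI) an instance boots with.
# Assumes the classic EC2 API tools are on PATH and credentials are
# exported; the instance ID below is a placeholder.
INSTANCE="i-00000000"
REGION="us-east-1"
RUN="echo"   # drop the echo prefix to really call the tools

# An old pv-grub AKI here (pre-1.02) means you're exposed to the bug.
$RUN ec2-describe-instance-attribute "$INSTANCE" --kernel --region "$REGION"
```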
So I killed the instance. Figuring it was config diffs between CentOS and the AMI, I cloned my live server and fired it up as a small instance to get decent build perf. Tested 0.6.4, all worked; brought it back up as a micro and, blamo, same death spiral. Back to a small instance, tried 0.6.6, and once again it worked as a small but had the same problem back on the micro.
Next up was a brand new AMI: build node 0.6.6, run as a micro. Everything was happy. So it must be something that's gotten fixed along the way. Back to the clone and

```shell
yum upgrade
```

Build node, try to run, death spiral. Argh! So finally I thought I'd file a ticket with node.js, but first looked through the existing issues and found this:
which pointed me at the relevant Amazon release notes, which had this bit in them:
After using yum to upgrade to Amazon Linux AMI 2011.09, t1.micro 32-bit instances fail to reboot.
There is a bug in PV-Grub that affects the handling of memory pages from Xen on 32bit t1.micro instances. A new release of PV-Grub has been released to fix this problem. Some manual steps need to be performed to have your instance launch with the new PV-Grub.
As of 2011-11-01, the latest version of the PV-Grub Amazon Kernel Images (AKIs) is 1.02. Find the PV-Grub AKIs for your given region by running:

```shell
ec2-describe-images -o amazon --filter "manifest-location=*pv-grub-hd0_1.02-i386*" --region REGION
```
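The classic API tools print tab-separated lines starting with `IMAGE`, with the image ID in the second field, so pulling out the AKI ID is a one-liner with awk. The sample line below is illustrative (I've only kept the fields that matter), not captured from a real run:

```shell
#!/bin/sh
# Sketch: pull the AKI ID out of ec2-describe-images output.
# The sample stands in for a real run; the classic tools emit
# tab-separated lines whose second field is the image ID.
sample='IMAGE	aki-805ea7e9	ec2-public-images/pv-grub-hd0_1.02-i386.manifest.xml	amazon	available	public'
aki=$(printf '%s\n' "$sample" | awk '$1 == "IMAGE" { print $2 }')
echo "$aki"
```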
Currently running instances need to be stopped before replacing the AKI. The following commands point an instance at the new AKI:

```shell
ec2-stop-instances --region us-east-1 i-#####
ec2-modify-instance-attribute --kernel aki-805ea7e9 --region us-east-1 i-#####
ec2-start-instances --region us-east-1 i-#####
```
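Since I ended up doing this dance more than once, the stop/modify/start sequence is easy to wrap in a tiny script. A sketch, assuming the classic EC2 API tools; the instance ID is a placeholder, the AKI is the 1.02 i386 one for us-east-1 from the release notes, and the `echo` prefix keeps it a dry run:

```shell
#!/bin/sh
# Sketch: swap an instance onto the fixed PV-Grub AKI.
# The instance ID is a placeholder; drop the echo prefix to run for real.
set -e
INSTANCE="i-00000000"
REGION="us-east-1"
NEW_AKI="aki-805ea7e9"   # 32-bit pv-grub-hd0 1.02 AKI for us-east-1
RUN="echo"

$RUN ec2-stop-instances --region "$REGION" "$INSTANCE"
$RUN ec2-modify-instance-attribute --kernel "$NEW_AKI" --region "$REGION" "$INSTANCE"
$RUN ec2-start-instances --region "$REGION" "$INSTANCE"
```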
If launching a custom AMI, add a `--kernel` parameter to the `ec2-run-instances` command or choose the AKI in the kernel drop-down of the console launch widget.
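For the custom-AMI route, the launch command ends up looking something like this. The AMI ID is a placeholder, the AKI is the us-east-1 one from above, and the `echo` prefix makes it a dry run:

```shell
#!/bin/sh
# Sketch: launch a custom AMI pinned to the fixed PV-Grub AKI.
# The AMI ID is a placeholder; drop the echo prefix to launch for real.
AMI="ami-00000000"
NEW_AKI="aki-805ea7e9"
RUN="echo"

$RUN ec2-run-instances "$AMI" --kernel "$NEW_AKI" \
    --instance-type t1.micro --region us-east-1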
Following these instructions finally did the trick and 0.6.6 is happily running on my old micro instance. Hope this helps someone else get this resolved more smoothly.