Archive for the 'linux' Category

Hawk’s Shirt of the Day 2014-02-11

Born to Frag

This is a very old shirt.  I probably got it in the late ’90s.

It’s Tux, the Linux penguin, wearing a helmet with “Born to Frag” on it, holding a BFG with a “No Windows symbol” sticker on it.

Hey look, the site still exists!

Hawk’s Shirt of the Day 2014-01-30



This is a very old shirt, from back when DVDs were relatively new. The idea was to watch DVDs in linux. The manufacturers said, “No.” So the linux community hacked it, and we watched DVDs in linux. This was celebrated by linux users all over the world. But not by the CCA.  The CCA got a judge to issue an order to remove the code from their website.  So the community came up with dozens of ways to distribute the code in a way which was not immediately interpretable by a computer.  This shirt is one of those ways: the DVD CSS cracking code is printed on the back.

I got this shirt to help support the idea of watching DVDs on whatever OS you want. After all, I bought the disc, and therefore I should be able to watch it on any OS/device. The manufacturers probably would disagree with this idea. They probably want me to pay each time I watch the thing, and only on devices which they approve. Screw that!

Hawk’s Shirt of the Day 2013-10-06

sudo make me a sandwich. okay.

make me a sandwich.
what? make it yourself.
sudo make me a sandwich.

Epic!  This is yet another collaboration between XKCD and ThinkGeek.  See original comic here:

In linux, you normally operate as a regular user, such as ‘jedihawk’.  But if you need to make a modification to the system, this usually requires special privileges.  The user who can do anything on the system is known as ‘root’.  To get to ‘root’ from a normal user, use the command ‘su’.

BUT, if you want to remain a normal user and still do something as ‘root’, use the command ‘sudo’.

The command ‘su’ originally stood for “switch user”, because you can use it to switch to any other user on the system (if you know the password).  But it has also come to mean “super user”, as in the user (‘root’) who can do anything.
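The difference is easy to see from a shell. A minimal sketch (the su/sudo lines are commented out because they need privileges and, for sudo, a configured sudoers entry):

```shell
# whoami prints the user you are currently running as
whoami
# id -u prints the numeric user id; root is always uid 0
id -u
# run a single command as root (needs sudo installed and a sudoers entry):
# sudo id -u     # would print 0
# or switch to a root shell outright (asks for root's password):
# su -
```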

Installing ZFS in CentOS


First, install the build dependencies:

# yum -y install fuse-devel libattr-devel libaio-devel libacl-devel zlib-devel scons openssl-devel

When installing ZFS-Fuse itself, try this first:

# rpm -Uvh

It didn’t work for me; the host fails to resolve. 🙁

Also try:

# yum -y install zfs-fuse

But that didn’t work for me either, so compile from source:

# cd work
# wget

If the file you downloaded is 0 bytes long (that’s how it is as of this writing), grab it from my server:

# wget

Then continue the installation:

# tar xjvf zfs-fuse-0.7.0.tar.bz2
# cd zfs-fuse-0.7.0/src
# scons
# scons install

That should do the basic installation, but it doesn’t handle the start-up scripts. You’ll have to do those manually, too:

# cd (installation dir)/contrib
# cp -iv zfs-fuse.initd.fedora /etc/rc.d/init.d/zfs-fuse
# cd /etc/rc5.d
# ln -s ../init.d/zfs-fuse S07zfs-fuse

And the config script (though it’s not strictly necessary unless you’re actually going to change any of the settings):

# cd (installation dir)/contrib
# cp -iv zfs-fuse.sysconfig /etc/sysconfig/zfs-fuse

Because the start-up script has the incorrect path hard-coded, let’s set up some symlinks (or you could edit the start-up script, if you’re brave):

# ln -s /usr/local/sbin/zfs-fuse /usr/sbin
# ln -s /usr/local/sbin/zfs /usr/sbin

There, done. Hopefully. Reboot it after you build a zpool to see if it really works or not.


ZFS is awesome.

ZFS is a type of filesystem developed with certain attributes in mind, such as verifying its data to within an inch of its eternity. But what really interested me is the way you can add a block device (or a file) to it and it gets incorporated into the existing filesystem and increases the available storage dynamically!

ZFS isn’t available as a native linux kernel module due to license incompatibility (ZFS is under the CDDL, the linux kernel is under the GPL), so you have to hook it in using FUSE (Filesystem in Userspace).

Here’s the quick-and-dirty way to get it installed in Debian/Ubuntu:

# apt-get install libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
# apt-get install zfs-fuse

After getting it installed (make sure the /sbin/zfs-fuse daemon is running), I tried it out using a few regular files (not block devices like entire drives).

First, create a 100M file:

# cd /home/jedihawk/zfs_research
# dd if=/dev/zero of=zpool1 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.134208 s, 781 MB/s

Next, make a “ZFS Pool” on it:

# zpool create hawkfilepool /home/jedihawk/zfs_research/zpool1

And that’s basically it. Now I have a ZFS filesystem in a “ZFS Pool” called ‘hawkfilepool’ and it’s already mounted (from the root) and it works and everything:

# zpool list
hawkfilepool  95.5M  95.5K  95.4M     0%  1.00x  ONLINE  -
# mount | grep zfs
hawkfilepool on /hawkfilepool type fuse.zfs (rw,allow_other,default_permissions)
# df -h | egrep 'hawk|Filesystem'
Filesystem      Size  Used Avail Use% Mounted on
hawkfilepool     64M   21K   64M   1% /hawkfilepool

A “ZFS Pool” is a collection of storage devices, such as drives or files, which make up one single ZFS filesystem. From linux’s viewpoint, there is only one block device.

Notice something here: the available space reported by filesystem tools such as df differs from the available space reported by the zpool tool. This is because of all the trouble ZFS goes through to guarantee data integrity. That takes extra space, but it’s less noticeable with larger filesystems. So let’s make it larger…

# dd if=/dev/zero of=zpool2 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.138231 s, 759 MB/s
# dd if=/dev/zero of=zpool3 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.139163 s, 753 MB/s
# zpool add hawkfilepool /home/jedihawk/zfs_research/zpool2 /home/jedihawk/zfs_research/zpool3
# zpool list
hawkfilepool   286M  85.5K   286M     0%  1.00x  ONLINE  -

How awesome is that?!? The available storage went up automagically! I didn’t have to unmount/remount anything!

I copied some large .mp3 files into it, played them, and everything worked fine. I love ZFS!

Here’s what it looks like after the third 100M file, plus some test .mp3 files:

# zpool list
hawkfilepool   286M   214M  72.7M    74%  1.00x  ONLINE  -
# df -h | egrep 'hawk|Filesystem'
Filesystem      Size  Used Avail Use% Mounted on
hawkfilepool    254M  213M   41M  84% /hawkfilepool

I can’t wait to add a bunch of USB drives together into one huge ZFS pool!


My New Drive Cloning Station

The computer is that small thing in the small case on the bottom shelf. I’ve removed its original, inadequate (100W) power supply and replaced it with a more modern 400W power supply. Naturally, it doesn’t fit in the case, so it’s sitting on its side on the shelf at a slight angle behind the case. You can just see the top of it.

On the top shelf is the source drive. I put the source drive on top to give it the best chance to remain cool. That red glowing case fan helps, too. I just clamped it to the shelf support. The target drive is sitting on top of the computer case. Both are SATA drives (the red cables).

The system didn’t originally have SATA capability (that’s how old it is), so I bought an expansion card and installed it into the only available PCI expansion port. You can see it in the first image; it’s the vertical card with the two red SATA cables. But the system has two IDE slots on the motherboard, so I left the black IDE ribbon cable hanging out the front slot (which was used for an optical drive) just in case I need to get something off of an old IDE drive. Or to run SpinRite on it.

The little USB drive connected to the front USB slot has a bootable version of linux called Trinity Rescue Kit. It’s a nice little product with all kinds of nifty tools in it. But the one I’m using now is called ddrescue. Here’s what it looks like in operation:

Here’s the command I used:

ddrescue -v -r 1 /dev/sda /dev/sdb /trk/logs/clone_20120917.log

The -v means output lots of information (verbose).
The -r 1 means re-try a troubling sector once.
The rest is the source device, the target device, and the filename of the log file. I had to make the /trk/logs directory (not there by default). It’s stored on the USB drive, root dir, in case I need to resume later on. If you don’t specify a log file, you won’t be able to resume automagically later on. I’ve had several power failures here in glamorous Hollyweird on this cloning operation, but because I specified a log file, ddrescue just picked right up where it left off and continued the operation.

WARNING: Make sure you know which drive is the source drive, and which drive is the target drive! Use these commands and compare serial numbers:

hdparm -i /dev/sda
hdparm -i /dev/sdb

That’s assuming that your drive devices are /dev/sda and /dev/sdb. For a cloning operation like this, both drives should ideally be the same make and model; at minimum, the target must be at least as large as the source. I made notes of the full serial numbers on each drive so that I would be sure to do it right.

What you can’t see, due to the camera’s flash, is this cute little USB-powered desk lamp I got from ThinkGeek. Isn’t it cute?

Ubuntu wants to restart often

After most updates, Ubuntu wants to restart/reboot my system.

In a word: No.

I’m not running a Microsoft product here. One thing I like about linux is that it can run virtually forever. I run my system for months, sometimes years, between reboots. Why? Simple: I have a lot of stuff open. It takes a long time to set all of it up and get it running. I’m not going to do that every other day or every week. Not even every month. If that means I only update once or twice a year, then so be it.

How to make a drive into a file in linux

A friend of mine asked me to help him make a copy of two drives. These drives work together in a RAID 0 array. With RAID 0, you get approximately twice the speed, but also twice the chance of failure.

So, understandably, they would like to have a backup of the system. But they don’t want something that has to be installed, and then configured, and all that stuff. They are not “linux people”. They want something where they can yank out the old drives and insert two new drives and have it “just work”.

Here’s the overall procedure:

  • Make a bootable Knoppix USB drive.
  • Connect a new drive to the system. This new drive will store the two source disk images.
  • Boot Knoppix and Ctrl-Alt-F1 into command-line mode.
  • Make a file from the entire 1st disk and put it on the new disk.
  • If the other drive is not connected, shut the system down and connect the other drive.
  • Make a file from the entire 2nd disk and put it on the new disk.
  • Disconnect the new drive & the USB drive, and reconnect the originals.
  • Boot it up again and watch it to make sure all is well.

For the linux guru, here are the commands I used:

fdisk -l

fdisk is usually the tool you use to set up partitions from the command-line. Using it like this with the -l switch gives you a list of all the partitions on all connected drives, whether they are mounted or not. Very useful.

dd if=/dev/sdc | gzip > drive1_250g_2011-08-19.img.gz

This is the magic linux command used to actually make the file from one whole disk. dd is a command which simply routes data. Every device in linux is a file, so it’s easy to read an entire disk. In the above case, dd is getting its data (if = input file) from the device which represents the entire disk: /dev/sdc. sdc means the third SCSI/SATA disk detected. On your system, it may be sda or sdd.

By default, dd outputs data to STDOUT, so here I’m piping it into the gzip command, which compresses data. If you don’t compress, the image file will be as big as the entire disk, used space or not, because reading the raw device copies every sector, filesystem bookkeeping and all. gzip squeezes all those empty and repetitive regions down to almost nothing.
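Here’s a scaled-down sketch of the same pipeline you can run safely, with a small scratch file in /tmp standing in for the disk (all filenames here are made up for the demo):

```shell
# create a 1M scratch "disk" full of zeroes
dd if=/dev/zero of=/tmp/fakedisk bs=1k count=1024 2>/dev/null
# image it through gzip, exactly like the real command but on a file
dd if=/tmp/fakedisk 2>/dev/null | gzip > /tmp/fakedisk.img.gz
# restore it the same way the real restore works (zcat | dd)
zcat /tmp/fakedisk.img.gz | dd of=/tmp/fakedisk.restored 2>/dev/null
# verify the round trip was lossless
cmp /tmp/fakedisk /tmp/fakedisk.restored && echo "round-trip OK"
```

Same commands, same flow; only the device names change for the real thing.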

Finally, I’m redirecting the output from gzip into a file, and I’m making sure that it has an extension which describes what kind of file it is. But you can call it whatever you want.

The first time I tried this command, I used bzip2 because it has better data compression routines and therefore makes smaller files. But better compression also means more CPU computations, and that alone slowed down performance to an unacceptable level. Why an unacceptable level? This server that I was working on had to be fully online by 7am. I took it offline and started work around midnight. This gave me about 6 hours to copy the drives. I gave myself an extra hour to debug and get it put back together and installed where it was in the rack, which should be plenty of time.

Using the above dd command with bzip2, I was able to copy about 9-11 MB / sec. This put the total time for one drive at about 6-8 hours. I didn’t have that much time.

Using the above dd command with gzip, I was able to copy about 40-45 MB / sec. This put the total time for one drive at about 90-105 minutes. Now I had enough time in one night to copy the drives. Additionally, the files were very small with gzip, so bzip2 would not have made a significant difference but would have taken much, much longer.

How did I know the rate of copy using dd?

kill -USR1 (pid of dd)

First, find out what the Process ID is for the running dd command. To do this, Ctrl-Alt-F2 into the second virtual terminal and run:

ps aux | grep "dd if"

The PID is the first number on the dd line. Once you’ve got that, use the above kill -USR1 command and dd will output several lines of useful information (in the virtual terminal where it’s running), including how much data has been transferred and the rate.
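Assuming the procps tools are available, pgrep does the same lookup in one step and never matches its own process the way a plain grep does. A sketch, using a sleep as a stand-in for the long-running dd:

```shell
# start a stand-in long-running process
sleep 37 &
BGPID=$!
# pgrep -f matches against the full command line and excludes itself
FOUND=$(pgrep -f "sleep 37" | head -1)
echo "found pid $FOUND"
kill $BGPID
```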

I set it up so that it would do this automatically every 5 minutes like this:

while kill -USR1 (pid of dd) ; do sleep 300 ; done

This while loop runs the stated command until that command returns false. The body of the while loop simply sleeps for 5 minutes. After starting this loop, Ctrl-Alt-F1 back into the first virtual terminal to watch dd’s progress.
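The same polling pattern can be sketched against a harmless stand-in process. Here kill -0 replaces kill -USR1: it delivers no signal at all, it only tests whether the pid is still alive, so the loop ends as soon as the process exits:

```shell
# start a stand-in process that exits after a few seconds
sleep 3 &
BGPID=$!
# kill -0 returns true while the pid exists, false once it's gone
while kill -0 $BGPID 2>/dev/null ; do sleep 1 ; done
echo "pid $BGPID is gone"
```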

I was fully successful in copying the two drives to files and had the server back in operation before 5am.

Once I got home, I ordered up two new drives exactly like the old ones. Well, mostly exactly like the old ones. Once they arrived, I connected one to my SATA-to-USB drive adapter and ran this:

zcat (filename) | dd of=/dev/sdx

MAKE SURE YOU KNOW WHAT /dev/sdx is! Running this command on the wrong device will wipe out whatever is on that device, including your main linux filesystem!

How to find out the device you just plugged in?

dmesg | tail

From the output of the above command, you should be able to figure it out.

Decompressing on the fly and writing to a drive like this via a SATA-to-USB interface will be very slow. The max rate I achieved was 4M / sec. More commonly, I achieved 3.8M / sec, which is a little more than 1G every 5 minutes. That puts the total time up around 18 hours per drive. But hey, I was in no hurry. The drive finished up overnight and then I did the other one, with the same command but a different filename.
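A quick back-of-the-envelope check of those numbers (shell integer math, rounded down; the ~4 MB/s rate and 250G drive size are from above):

```shell
RATE=4                    # MB/s, roughly what the SATA-to-USB link managed
echo "$(( RATE * 300 )) MB every 5 minutes"
echo "$(( 250 * 1024 / RATE / 3600 )) hours for a 250G drive"
```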

Hope this helps.

Apache 2.2.19 on Ubuntu 9.10

We’re running a webserver on Ubuntu 9.10. For various reasons, we decided to upgrade Apache without upgrading Ubuntu. Docs for Apache I followed:

And here’s what I did:

1) Download the latest Apache source tarball. In my case, it was Apache 2.2.19.


2) Extract.

tar xjvf httpd-2.2.19.tar.bz2
cd httpd-2.2.19

3) Configure the build to be like our existing Apache install. Use your own ./configure command.

./configure --prefix=/usr/local/apache2 --enable-mods-shared="all authn_file authz_default authz_groupfile authz_host authz_user deflate php5 proxy_http proxy rewrite ssl"

4) Build it.

make && echo done with step: make
make install && echo done with step: make install

5) Now comes the hard part. I had to migrate our existing config tree over to this new custom source compile. I started off by copying the entire /etc/apache2 dir to /usr/local/apache2/conf

cp -a /etc/apache2/* /usr/local/apache2/conf/

I tried several configs, but the easiest thing to do was to rename Ubuntu’s old apache2.conf file to httpd.conf as that is the default config file Apache uses when it starts up. I also took out Ubuntu’s Include line which included the file httpd.conf as I don’t want it trying to include itself endlessly.

After that, I just needed to update the config.

I updated the ServerRoot directive:

#ServerRoot "/etc/apache2"
ServerRoot "/usr/local/apache2"

I changed Apache’s default user to Ubuntu’s standard www-data user:

User www-data
Group www-data

Instead of keeping the tree of mods-available and mods-enabled and all those separate files and symlinks, I just put it all into the main httpd.conf file. That works best for us; easier to maintain.

I used vim to edit our new httpd.conf file. From within vim:

:r !cat mods-enabled/*.load
:r !cat mods-enabled/*.conf

Order is significant here; proxy_module first, then proxy_http_module:

LoadModule proxy_module /usr/local/apache2/modules/
LoadModule proxy_http_module /usr/local/apache2/modules/

I updated the paths for each module. A standard Ubuntu module looks like this:

LoadModule log_config_module /usr/lib/apache2/modules/

But with my new custom compiled Apache, I needed this:

LoadModule log_config_module /usr/local/apache2/modules/

… because my new Apache lives in /usr/local/apache2.

I kept Ubuntu’s PHP5 binary install:

LoadModule php5_module /usr/lib/apache2/modules/

After all that config stuff, it was time to test it out. I temporarily changed the ports from 80 to 180, and from 443 to 1443 (for SSL), and tried it out.
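The temporary port change is just a pair of Listen directives; a hypothetical sketch of what that looks like (Listen is standard Apache config, but which file holds your SSL vhost depends on your layout):

```apache
# in httpd.conf: temporary test port instead of 80
Listen 180

# in the SSL vhost config: temporary test port instead of 443
Listen 1443
```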

/usr/local/apache2/bin/apachectl start

At this point, I have our original Apache running alongside my new custom-compiled one. Cool. But I’m done testing.

/usr/local/apache2/bin/apachectl stop

Once I’m done testing and I see that it’s working on non-standard ports, it’s time to put the ports back and go live. But there is one more important step here: the startup/control script!

Ubuntu uses /etc/init.d/apache2 as the control script. It has some fancy functions in it, sets some things using environment vars, and other stuff I don’t need. So I used some functions from Ubuntu’s startup script, and some stuff from a generic startup script and came up with this:


#!/bin/sh
# init script for the custom-compiled Apache in /usr/local/apache2
# PIDFILE is the default location for a --prefix=/usr/local/apache2 build

PIDFILE=/usr/local/apache2/logs/httpd.pid

# functions

pidof_apache() {
  # if pidof is null for some reason the script exits automagically
  # classified as good/unknown feature
  PIDS=$(pidof httpd) || true

  [ -e "$PIDFILE" ] && PIDS2=$(cat "$PIDFILE")

  # if there is a pid we need to verify that it belongs to apache2
  # for real
  for i in $PIDS; do
    if [ "$i" = "$PIDS2" ]; then
      # in this case the pid stored in the
      # pidfile matches one of the pids from pidof,
      # so a simple kill will make it
      echo $i
      return 0
    fi
  done
  return 1
}

# main

case "$1" in
  start)
    echo "Starting Apache ..."
    /usr/local/apache2/bin/apachectl start
    ;;
  stop)
    echo "Stopping Apache ..."
    /usr/local/apache2/bin/apachectl stop
    ;;
  graceful)
    echo "Restarting Apache gracefully..."
    /usr/local/apache2/bin/apachectl graceful
    ;;
  restart)
    echo "Restarting Apache ..."
    /usr/local/apache2/bin/apachectl restart
    ;;
  status)
    PID=$(pidof_apache) || true
    if [ -n "$PID" ]; then
      echo "Apache is running (pid $PID)."
      exit 0
    else
      echo "Apache is not running."
      exit 1
    fi
    ;;
  *)
    echo "Usage: '$0' {status|start|stop|restart|graceful}" >&2
    exit 64
    ;;
esac

exit 0

I renamed Ubuntu’s default Apache control script…

cd /etc/init.d
cp -a apache2 apache2.orig

… and installed the above into apache2

(copy script into your clipboard)
cat > apache2
(paste script)
^D (Ctrl-D, which signals end of input)

The above procedure will completely overwrite your apache2 script, so be sure you make a copy of it first.

And now for the moment of truth:

./apache2.orig stop && sleep 1 && ./apache2 start

I hit our website and all was well. We were down for about 2 seconds. I can live with that downtime.

The nice thing about this is that if it doesn’t work, I can always go back to the stock, binary Apache that comes with Ubuntu.

Hope this helps.

HJ-Split Multi-Platform File Splitter

I can recommend HJ-Split.

I have used HJ-Split in Windows to split up a huge file and send it to a friend of mine on a Mac. There’s no way he is ever going to install it on his Mac, so I’m not sure if the Mac version works, but HJ-Split works great in Windows.

I also used it to break up a large file and get it home and into my Linux system. I used the software to break up a 31G file into 1.5G pieces, put the pieces on my 32G USB drive, took it home, copied all the pieces onto my Linux machine at home, and then pieced it back together again using the ‘cat’ command. Worked perfectly.
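The same split-and-rejoin trick can be done with plain coreutils; here’s a scaled-down sketch with made-up sizes and filenames:

```shell
# make a ~100K scratch file standing in for the 31G original
head -c 100000 /dev/urandom > /tmp/hjbigfile
# split it into 30K pieces (split names them hjpiece_aa, hjpiece_ab, ...)
split -b 30000 /tmp/hjbigfile /tmp/hjpiece_
# the shell glob expands in sorted order, so cat rejoins them correctly
cat /tmp/hjpiece_* > /tmp/hjrebuilt
# verify the rebuilt file is bit-for-bit identical
cmp /tmp/hjbigfile /tmp/hjrebuilt && echo "files match"
```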

Networks are getting worse

It seems to me, networks (in general) are getting worse.

My network here at home, AT&T U-verse, sucks in general. It works enough of the time that I don’t bother looking for another. But it’s not good.

Example: I was downloading a radio show from one of my favorite DJs, S.A.M.E. Radio (Steve Anderson Music), and the file gets to about 60% or so and then just stalls out. My downloader (wget in this case) eventually times out and then automatically re-tries with resume, and gets the rest of the file. This happens every week.

Example: I help a local non-profit down the street with their networking needs. Almost every day, their network fails completely, for no apparent reason. We’ve tried changing routers and other networking equipment, testing for zombie machines hogging the network, etc. We’ve called the guys out to check the line; they say all is well. But I don’t think so. No one seems to be able to find the cause and fix the issue. It’s an ongoing issue.

Example: Recently, I needed to transfer a huge file (about 30G) from a remote server to my own machine or one of the servers that I control. It was basically a backup. It took me about a week to finally get this friggin’ file. I tried transferring with gFTP (Linux), but that timed out. I tried transferring with scp from the command-line, but that timed out too. I tried using wget/browser via Apache, but that also timed out. I tried initiating the transfer from the server to me, and from me to the server. Nothing worked; the transfer started off great and fast, but then just fell to nothing and eventually timed out. The only thing that finally worked was this: I sent the file to another linux server I control (which happened relatively quickly but still took all night), then transferred the file to my work computer, then broke it up into 1.5G pieces so that it would fit in a FAT32 filesystem, then put it on my 32G USB drive, then brought it home and copied all that into my home linux machine, then cat’ed it all back together again. All because of crappy networks.

Example: I was downloading some software from SourceForge with wget on the command-line (of course). It got 80% done, then just timed out. I let it sit there for a while and it finally re-started, resumed, and finished the file.

Failures like this didn’t used to happen. Seems like they are happening a lot more these days. What’s the world comin’ to?!?

Gentoo Linux sucks

When I first installed Gentoo, I thought it was pretty good. It was not as easy as other distros (such as Ubuntu), but it gave me lots of control on how I wanted the system configured and set up and I really liked that. I liked it so much that I was considering using it for my main development workstation here at home.

At work, we have an old mail server set up on Gentoo. For some reason, another tech here rebooted it. It would not come back up. This happened in the middle of the afternoon, which is not the best time for things like this to happen. So I connected to it with a Remote KVM (Keyboard, Video, Mouse) device and saw that it had failed to boot up. It was sitting there at the console with a message like this:

fsck.ext3: No such file or directory while trying to open /dev/sda3
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193

This looked like a standard problem where I would need to manually run a repair on the filesystem and then reboot it. However, I soon discovered that this was a very different issue.

Searching around, I found all kinds of stuff:

I tried re-compiling the kernel with different options (about half a dozen times). I tried upgrading/downgrading specific packages such as “udev”, but that gave me lots of problems due to Portage bitching about USE flags, dependencies, masked packages, and so on. Nothing but problems with Portage. I tried boot-time options in the “kernel” line, at least three or four different ones. I tried many changes/tweaks/etc. Nothing worked.

Meanwhile, my boss is a very unhappy customer. And when I say “unhappy”, I mean the kind of unhappy where he begins to imagine doing bad things with weapons to my head. Luckily, he’s in another country.

But that doesn’t stop every other employee in the company from calling me up on my cell phone and telling me how I am personally ruining their lives for all eternity.

After a day or so of beating my head against the Gentoo Linux server, I said screw it and decided to migrate to a newer server. This involved getting the networking to work, creating devices in /dev, starting daemons such as sshd, and so on. But every step was difficult and caused me problems. And all the while, people were calling me asking when the mail server would be back up.

I finally get all the data migrated over to the new server but things are set up very differently on Ubuntu. So it takes almost ALL weekend (working from home) to get the new server running right.

I can recommend Ubuntu. But I cannot recommend Gentoo. Why? Because all of this trouble and downtime was caused by an update: an update that Portage did at some point earlier, as part of a whole system update. If the system had never been rebooted, I still would not know about it. I would have had no clue that the thing would not boot up in the event of a power failure or whatever.

Go with Ubuntu. Leave Gentoo for those geeks who love problems (because that’s what you’ll get).

An open letter to Jeff Bezos (of

Dear Jeff,

I have a lot of respect for you. I think you’ve done amazing things with; indeed, truly remarkable things. When idiot investors were walking away from your presentations, you were creating the amazing future which eventually became

Example: You started as a book store, then just took it to the moon. Outstanding!

Example: Amazon created wish-lists. I don’t know if y’all were the first, but was the first site I found that did wish-lists. This made it easy for me to keep track of the items I was saving up for.

Example: Amazon One-Click. It’s very convenient, fast, and easy to set up. Makes shopping on a cakewalk! No one else did that, and not many other sites do that now either.

Example: Amazon Associates. Made it possible for me to have an online bookstore without the actual bookstore (or moviestore, etc.).

Example: Amazon Marketplace. I’m not saying eBay hadn’t hit the mark first, but you guys did a bang-up job on the Marketplace and I still find amazing deals there, when I don’t want to pay full price, or when it’s out of print.

Example: Amazon MP3. I like being able to buy the songs that I like on an album, instead of being forced to buy the whole thing. In the past, I have purchased CDs for just one song on the disc. The downloaded MP3 is high bit-rate, and not DRM’ed. This is a big point, as I can put it on any of my audio player devices (like my phone).

Example: Amazon Unbox. I just may use Unbox for all my video purchases. I’m kinda undecided on it, but it’s still intriguing. I can buy a movie and start watching it within about 10 minutes (sometimes less) right on my big-screen computer. Nobody beats 10-minute shipping. Plus, it’s there for me forever (can’t lose it); that’s a big plus.

And finally… The Amazon Kindle. Wow! I love it! I just LOVE IT! This is the device that I want all my stuff on. This is the one. It’s just the right size, in my humble opinion, because I can slip it into my side pants pocket (even with the cover on). I love all its features, except the DRM’ed books. I can only read ’em on the device, not anywhere else; that’s a minus.

From my perspective, it looks like you are giving us people/consumers what we want. I think this is exactly the right thing to do. How do you know what we want? Do a survey. Doing surveys with enough people will give you a very accurate picture of what we want.

In a Charlie Rose interview with Marc Andreessen (creator of Netscape), Marc spoke about the Kindle. He said something like, “Oh, Kindle, I mean, it’s just–it’s gigantic.” He then went on about form factors… “The iPhone with a sort of three or four inch screen… a laptop or netbook with a 12 or 14 inch screen… and now you’ve got the Kindle with a sort of seven inch screen.” He and Charlie Rose go on to talk about others making a bunch of little “pads”, or “net pads”… Marc continues: “Somebody will figure it out. That thing, I mean, the Kindle does books and magazines and newspapers, but that form factor and that shape of a device and that weight in a couple of years is going to be doing video, it’s going to be doing music, it’s going to be doing video conferencing. It’s going to be doing telephony. It’s going to be doing Web browsing. It’s going to be doing everything, right? And so that’s the next — one of the fascinating things is that’s the next screen size and the next killer device, I think, is what’s going to happen.”

In a recent interview, you were asked about putting other media (such as video) on the Kindle, and you said something like, “Would you use a Swiss-army knife at the dinner table?”

Actually, I would. But that’s because I’m a geek. And I have used a Swiss-army-knife-style spoon-and-fork thingy for eating a whole meal. It worked quite well, no problemo.

Anyway, here’s my point: I (the consumer) want a device just like the Kindle, which can do e-books (just like it does now), play my MP3s (with all the features of an iPod), full color e-paper, play movies (e-paper is almost there, you can watch demos on YouTube of video-capable e-paper now), with a big SSD (Solid State Disk) to hold my entire library (of everything, audio/video/ebooks/PDFs/etc), two additional SD card slots for my own SD cards, wireless access to the internet (just like it does now), GPS services (just like it does now), longer battery life, and a good internet browser for web surfing & email (such as Mozilla Firefox). You could even hook it in to Amazon Unbox and sell me movies directly to my Kindle. Cha-ching!

I would pay a lot for a device that did all that. I’d bet others would, too.

Cell phones come close, but they’re too small to read whole books on (plus they don’t use e-paper), and they’re also too small for movies/videos. The best a cell phone has going for it is that it’s already networked. I can do a lot with my little flip-phone: web, Gmail, Google Maps, Yahoo Mail, Facebook, Twitter, Flickr, texting, IMs, take pictures (and post them to Flickr), listen to my MP3s with stereo Bluetooth headphones, and so on, but it’s not a good e-book reader, and it doesn’t have a fast-enough processor for video.

Now, let me go back a step, back to before there was a Kindle. Sony had a pretty good e-book reader, but it wasn’t networked, you needed to have a computer to put the books into it, there were not a lot of e-books available for it, and it cost a friggin’ arm and a leg.

I would bet I could not have convinced anyone to put any money into a project to create a cute little e-book reader which used e-paper, ran for days/weeks on a single charge, was networked (for free), had a deal with a big book-seller such as for all of its content, ran on a free, open-source operating system (Linux), and did all the other things (such as note-taking) that the Kindle does… and make it affordable. No way, bub. I would have been laughed at, all the way out the door.

Then Jeff Bezos creates and releases the Amazon Kindle. Bam! Now it’s possible. Before that, it was not within the realm of possibility. Now that it’s been released, the Kindle has shown that such a device is actually possible… and is really what people want.

NOW I could convince someone to make such a device. Indeed, I could convince someone (with money) to invest in a project to create the all-my-media-needs device described above, the one which is fully networked over the existing Sprint wireless network, portable (with long battery life), color e-paper, full-motion video, MP3/music player, with lots of storage, web browsing, email, VoIP, everything.

Google may already be working on such a device. Google has money to throw around, and they’ve got talent as well. Just look at what they did for cell phone operating systems with Android. They’ve also got a huge e-book project already in production, and one for the iPhone & G1.

The guy who owns that big news network, Rupert Murdoch, he’s got money to throw around. And he loves the Kindle. I’d bet he’s working on a device to do what you, Jeff Bezos, won’t let the Kindle become.

Let the Kindle fulfill its destiny. Create an API for it and let people play around inside it and create with it (just like Google’s Android OS for cell phones).

If you don’t, you will lose the market to devices which do what we want them to do, not necessarily what you want them to do. And if you lose the market, your dream of “everything ever printed available in the Kindle” will not happen.

Thanks for your time.

Humble Kindle owner

powernow-k8: failing targ, change pending bit set

When I first saw this message:

powernow-k8: failing targ, change pending bit set

I thought to myself, “Eh?” I had no idea what it meant. Then I did some research and discovered it was due to a hardware bug with some AMD processors, and that there was a process in linux (CentOS/Red Hat) which was unable to update a bit because of this hardware bug. This process was responsible for changing the speed of the processor on the fly, to help save power.

However, this process then did the stupidest thing I’ve ever seen: it repeatedly displayed this message to the console:

powernow-k8: failing targ, change pending bit set
powernow-k8: failing targ, change pending bit set
powernow-k8: failing targ, change pending bit set
powernow-k8: failing targ, change pending bit set

This is, without a doubt, the most suppressive error message I’ve ever seen. Because it was displayed to the console once per second, I was unable to debug the issue using the console. The best I could do was configure one of the network cards (blind, I might add), so that I could then SSH in to the machine from elsewhere. Then I was able to debug it.

Any process which repeatedly displays anything to the console, overwriting anything you’ve got up there, on any virtual terminal, is a suppressive process and should be completely eliminated with prejudice.
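In hindsight, there was a stopgap that would have allowed console debugging without configuring the network blind: turn the kernel’s console log level down so low-priority messages never reach the console. A sketch (the dmesg line is commented out because it needs root and changes live kernel state):

```shell
# Kernel messages are printed to the console only when their priority beats
# the console log level. Dropping that level to 1 (emergencies only) would
# have silenced the powernow-k8 flood without touching the driver:
#   dmesg -n 1
# The current console log level is the first of these four numbers:
cat /proc/sys/kernel/printk
```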

Here’s what I did to stop these suppressive error messages:

# /etc/rc.d/init.d/cpuspeed stop
Disabling ondemand cpu frequency scaling:                  [  OK  ]

Now those annoying (and suppressive) error messages stopped appearing on the console, and I breathed a sigh of relief. However, the dang thing would start back up again after a reboot:

# chkconfig --list cpuspeed
cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off

chkconfig is telling you that this process will start for all runlevels except 0 and 6.
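Those runlevel flags are easy to pull apart if you ever need to script against them; a throwaway sketch that works on the sample line above (no chkconfig required):

```shell
# Print only the runlevels where the service is "on".
# The sample line is the `chkconfig --list cpuspeed` output from above.
line='cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off'
echo "$line" | awk '{
    for (i = 2; i <= NF; i++) {      # field 1 is the service name
        split($i, pair, ":")         # pair[1] = runlevel, pair[2] = on/off
        if (pair[2] == "on") printf "%s ", pair[1]
    }
    print ""
}'
```

If you only want the service off in particular runlevels, rather than removed entirely, ‘chkconfig --level 2345 cpuspeed off’ does that instead.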

Here’s what I did to prevent it from loading up again:

# chkconfig --del cpuspeed

Now check the status:

# chkconfig --list cpuspeed
service cpuspeed supports chkconfig, but is not referenced in any runlevel (run ‘chkconfig --add cpuspeed’)

There, it’s gone forever. This did not fix the hardware bug, it just prevented my system from trying to change the cpu speed and, failing, spewing forth suppressive error messages all over the console continuously.

I hope this helps.

PKG_CONFIG_PATH is wonderful

I’m trying to install GTK+ version 2 (2.10.4) on my CentOS 4.4 (Red Hat compatible) system. Right after I run the ‘./configure’ command, I get a complaint about having the incorrect version of glib installed:

checking for BASE_DEPENDENCIES... Requested 'glib-2.0 >= 2.12.0' but version of GLib is 2.4.7
configure: error: Package requirements (glib-2.0 >= 2.12.0 atk >= 1.9.0 pango >= 1.12.0 cairo >= 1.2.0) were not met.
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

So I download and install glib from source. It installs perfectly, no problemo. Then I go back to my gtk install dir and run ‘./configure’ again, but I get the _same exact complaint_.

This is really, _really_ annoying.

The thing to understand about this is that the newer glib I just installed was placed in ‘/usr/local/lib’, rather than ‘/usr/lib’. But ‘/usr/lib’ is checked first! So the old version is found rather than the new one I just installed.


The fix is to set the PKG_CONFIG_PATH environment variable:

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

This tells pkg-config where to look for the proper (newer) installed libraries. See here, this is the original:

> cat /usr/lib/pkgconfig/glib-2.0.pc


Name: GLib
Description: C Utility Library
Version: 2.4.7
Libs: -L${libdir} -lglib-2.0
Cflags: -I${includedir}/glib-2.0 -I${libdir}/glib-2.0/include

And this is the newer one I just installed:

> cat /usr/local/lib/pkgconfig/glib-2.0.pc


Name: GLib
Description: C Utility Library
Version: 2.12.3
Libs: -L${libdir} -lglib-2.0
Cflags: -I${includedir}/glib-2.0 -I${libdir}/glib-2.0/include
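You can watch this precedence in action with pkg-config itself. A minimal sketch using two throwaway .pc files in temp directories (assuming pkg-config is installed; the temp directories stand in for /usr/lib/pkgconfig and /usr/local/lib/pkgconfig):

```shell
# Fake "old" and "new" glib-2.0.pc files, mirroring the two versions above.
old=$(mktemp -d)
new=$(mktemp -d)
printf 'Name: GLib\nDescription: C Utility Library\nVersion: 2.4.7\n'  > "$old/glib-2.0.pc"
printf 'Name: GLib\nDescription: C Utility Library\nVersion: 2.12.3\n' > "$new/glib-2.0.pc"

# Only the "old" dir on the path: pkg-config reports 2.4.7.
PKG_CONFIG_PATH="$old" pkg-config --modversion glib-2.0

# "New" dir listed first: directories are searched left to right, and
# PKG_CONFIG_PATH is consulted before the built-in dirs, so 2.12.3 wins.
PKG_CONFIG_PATH="$new:$old" pkg-config --modversion glib-2.0
```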


Technical issues like this are what is keeping Linux off the mainstream desktop.

Yahoo timeout vs Google timeout

A “timeout” value is how long you wait for something to fail.

I use my Linux desktop as my primary workstation, and my Windoze machine is for anything that I can’t do in Linux, like RAdmin and games.

In Linux, I let everything run. I let it run for months and months. I let it run until it dies. Then I start it back up again and let it run some more.

I usually don’t do this with my Windoze machine. I usually shut it down when I’m done with it, or put it in Stand-by mode.

Okay, back to my point… In Linux, I’ve got two browsers (at least) running all the time: Mozilla and Firefox. Mozilla is the full suite of apps, so that’s what I use for my email too. Firefox is just the Mozilla web browser. I’ve got Mozilla running on my left monitor, and Firefox running on my right monitor. This way I can log in to both my Yahoo Mail accounts at the same time.

Every day, I (usually) check my various email accounts. I have about a dozen. I have two for Yahoo, and just one for Google Mail. I keep a tab open for each in Firefox, and one tab open in Mozilla for my primary Yahoo account.

Here’s the point: Yahoo times out in 24 hours. This means I have to re-login every day. Google Mail sits there for two weeks, then I have to re-login. Google’s 2-week timeout rocks. Yahoo’s 24-hour timeout sucks ass.

Hey Yahoo, do you hear this? Your 24-hour timeout _sucks ass_. That means it’s _bad_, _too short_, and _annoying_. Also it’s a _pain in the ass_ to have to re-login every day. It’s a pain in the ass because I take full responsibility for the security of my computer… so _you don’t have to_. You don’t have to bypass my responsibility on this; I know what I’m doing with my logins.

Google seems to appreciate this. Thanks, Google. Keep up the good work!

MySQL ‘File not found’ error

I ran into a problem the other day with our production machine running MySQL 4.1.11.

I have an automated system setup to make a backup of the entire database nightly. During this procedure, I got this error:

mysqldump: Got error: 1105: File '[filename]' not found
(Errcode: 24) when using LOCK TABLES

I didn’t know what to make of it, as the [filename] that it was looking for was right there on the disk, and it had correct permissions. I ran the mysqldump program manually and got the same error, but on a different file. Much more research later and… as it turns out, the problem is NOT that it can’t find the file, but that the system had run out of file handles. I added an open_files_limit line to /etc/my.cnf, something like this (pick a value to suit how many tables you have):

open_files_limit = 2048
Then I restarted the database and all was well.
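For the record, Errcode 24 is EMFILE, the kernel’s “too many open files” error, which is why raising a file-handle limit cures a “file not found” symptom. A quick way to gauge the headroom (the mysql lines are commented out because they assume a running server with login credentials already set up):

```shell
# The open-file limit this shell (and any process it spawns) inherits:
ulimit -n

# On the server side, compare the configured ceiling with current usage:
#   mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"
#   mysql -e "SHOW GLOBAL STATUS LIKE 'Open_files'"
```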

This is one of those rare cases where the error message is not entirely helpful. I hope this helps explain things.

Haxial Calculator

I’ve been hunting around for a good calculator for a while. As part of this search (and before I picked up my TI V200), I downloaded a nice calculator called The Haxial Calculator, for Windoze (Windows). Thankfully, it also runs in Linux. They also have a version for the Mac.

The main design difference between this calculator and other calculator programs is that it’s designed to take advantage of the capabilities of a computer, rather than go through all the trouble to simulate a regular calculator device in a computer.

A very good example of a calculator device simulation for your computer is the DreamCalc 3 Scientific and Graphing Calculator. This sucker rocks, is easy to use, does graphs, has a ticket (history) window, and I would have bought it until I found the Haxial Calculator.

Here is a screenshot:

This is the standard calculator window. Many more screenshots are available on Haxial’s website.

RH/FC RPM packages

Just found a good site for Red Hat / Fedora RPM packages:

Contains well over 1400 projects.

Hope this helps.

5 sec DNS delay issue

Ever since I first upgraded my main linux machine to Fedora Core 3, I noticed a 5 second delay (per DNS server) for every lookup. Well, almost every DNS lookup. If I ran ‘host’ on the command line, it was instant. But for all other programs, there was a very annoying 5-second delay per DNS server.

During my search for a solution, I found this site:

And therein lies the problem and solution: IPv6. By default, FC3 tries to use IPv6 for all DNS lookups. When that timed out, it would fall back to regular IPv4.

So I disabled IPv6 using the command from the guide linked above:

(as root)   echo "alias net-pf-10 off" >> /etc/modprobe.conf

And then (believe it or not) you need to reboot.
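If you want to rehearse the edit before touching the real file, the same append-and-verify works against a throwaway copy (the real target is /etc/modprobe.conf, and the real command must run as root):

```shell
# Rehearse the fix on a temp file standing in for /etc/modprobe.conf.
conf=$(mktemp)
echo "alias net-pf-10 off" >> "$conf"

# Confirm exactly one such line landed (appending twice would duplicate it).
grep -c '^alias net-pf-10 off$' "$conf"
```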

Hope this helps.

Archives and Links