I'm always impressed by how significant the speed improvement of a typical linux desktop can be when a disk with classic rotational platters is replaced by a solid state drive. Boot is immediately faster, desktop applications appear almost instantly on screen, ditto for the icons in the menu bar. Clearly a huge bottleneck disappears, and the processor is consequently used more intensively. A few years ago, we used to say that adding more memory was a good way to give a second youth to an aged linux box; nowadays, putting a solid state drive in the box is an even better way.
During the migration of my build system for fedora packages from plague to koji, I set up all the machines needed by this infrastructure as virtual machines. This is interesting for builder machines, as their dedicated resources can be adjusted dynamically, depending on the number of packages the system has to build. Unfortunately, I was quickly bitten by a recurring bug in my kvm builders: as soon as they are configured with more than one virtual processor, processes begin to stall and the virtual machine becomes unresponsive, although there's no sign of abnormal disk, network, or cpu activity.
After some research on the "soft lockup" message that the guest kernel writes to the log files, which seems to be a rather common symptom, I narrowed the possibilities down to the clock source used by the guest kernel. Booting with "clocksource=acpi_pm", thus forcing the use of this clock source, has provided much more stability in my VMs so far. The same change can also be made at runtime:
# echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
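To see what the guest kernel offers before switching, and to make the choice survive reboots (the grub.conf path below is the usual Fedora location; adjust for your setup):

```shell
# list the clock sources the guest kernel can use, and the active one
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
# to make the change permanent, append clocksource=acpi_pm to the
# kernel line in /boot/grub/grub.conf
```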
The gnome-keyring-daemon is now a drop-in replacement for ssh-agent. Until recently, I faced several situations where the daemon silently crashed and all the ssh keys already loaded in the agent were lost. My use of ssh is quite intensive; for example, I'm a big fan of pdsh, a tool that launches a command on a group of machines in parallel. Every machine is reached via ssh, so the ssh agent is heavily solicited when pdsh runs. I suspected bad locking of data structures in the agent when it is accessed from a multi-threaded environment, as is the case with pdsh. I filed this bugzilla, and proposed a patch that works really well for me.
However, if rebuilding the gnome-keyring package is not an option for you, it is still possible to recover the session and start a new ssh-agent instance. The problem is that each terminal shell has an environment variable, SSH_AUTH_SOCK, that points to the defunct ssh agent. So the trick is to symlink the socket of the crashed gnome-keyring-daemon to the control socket of the new ssh-agent instance, add the ssh keys again into the new agent, and you're done.
[bellet@monkey ~]$ env | grep SSH
SSH_AUTH_SOCK=/tmp/keyring-v3d1mX/ssh
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
[bellet@monkey ~]$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-TJfoGI5899/agent.5899; export SSH_AUTH_SOCK;
SSH_AGENT_PID=5900; export SSH_AGENT_PID;
echo Agent pid 5900;
[bellet@monkey ~]$ ln -sf /tmp/ssh-TJfoGI5899/agent.5899 /tmp/keyring-v3d1mX/ssh
[bellet@monkey ~]$ ssh-add
Enter passphrase for /home/bellet/.ssh/id_dsa:
Identity added: /home/bellet/.ssh/id_dsa (/home/bellet/.ssh/id_dsa)
In the ten years I have been providing a mirror for linux distributions, I have noticed that the network pressure on mirrors has noticeably decreased, even though the number of packages and the number of media carrying data have greatly increased over the same period. I remember the 100 Mbit/s network pipe being fully saturated for the week following the release day of popular distros. Now, the traffic is (only) multiplied by a factor of two or three for a few hours before coming back to a regular volume. The download profile has also evolved significantly, from massive downloads of iso files for fresh content, to a smoother traffic pattern of individual packages fetched as updates throughout a distribution's life cycle. The bulk of iso image downloads has certainly shifted from client-to-server to peer-to-peer.
I completed the migration to koji of my own personal build system, which handles a set of local RPM packages for Fedora. I'm quite satisfied, as the installation instructions are rather straightforward and well documented. The ability to use external repositories for released fedora packages is essential in my case, as my local packages come as an addition to fedora packages, just like epel builds on top of rhel/centos, or rpmfusion on top of fedora.
The need to merge the external repo (which is huge!) with the local one (which is small!) makes the mergerepos process take some time to complete, along with some memory (2 GB for the genpkgmetadata process), so it is better to reserve enough memory+swap on the koji builders to handle this task. The kojira service, which is dedicated to this periodic activity, can be disabled, and the repository regeneration started manually when needed.
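A sketch of how this can be handled, assuming a sysv-style init and a build tag named dist-f12-build (both names are from my own setup, and I'm assuming your koji client provides the regen-repo subcommand):

```shell
# stop the periodic repository regeneration
service kojira stop
chkconfig kojira off
# then, whenever local packages change, regenerate the repo by hand
koji regen-repo dist-f12-build
```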
The fedora releng team provides on FedoraHosted some scripts using mash and the kojihub api, to populate a tree with the latest packages for each release of the distribution handled by koji, to sign packages, and to push them to a central location. These scripts can easily be reused in a custom setup.
It's possible to replace an old "Core Duo" processor (T2600) with a "Core 2 Duo" one (T7600) on a Thinkpad T60p. These two CPUs use the same Socket M and are pretty much compatible. The major benefit is the availability of the 64-bit extensions in the latter model. I didn't notice an increase in cpu temperature, even though the newer model runs a bit faster at full speed...
For the last few days, now that most of my boxes support a 64-bit version of linux, I thought it was time to give an x86_64 distribution a try. Being lazy, as every sysadmin is supposed to be, reinstalling from scratch was not an option; a more elegant way is to update the installed packages directly, with some yum/rpm incantations. rpm's ability to handle packages for different architectures in the same database, for so-called multilib usage, has been very helpful in this migration. The only tricky part is the need to regenerate /var/lib/rpm/Packages for use with the 64-bit rpm command, using the rpmdb_dump and rpmdb_load helper commands.
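For the record, the regeneration can be sketched as follows (a sketch, assuming the old Packages file is readable by the dump helper; run as root, and keep a backup of /var/lib/rpm first):

```shell
# convert /var/lib/rpm/Packages so the 64-bit rpm can read it
cd /var/lib/rpm
mv Packages Packages.i386            # keep the original around
rpmdb_dump Packages.i386 | rpmdb_load Packages
rm -f __db.*                         # drop stale BDB environment files
rpm --rebuilddb                      # regenerate the indexes with the 64-bit rpm
```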
Some post-upgrade work is nevertheless needed, depending on the software running on the migrated box; for example, rrdtool databases are architecture-specific.
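For rrdtool, the usual workaround is a dump/restore cycle through the architecture-neutral XML format (the file name is just an example):

```shell
# rrd files are not portable between architectures:
# dump to XML before the migration, restore after
rrdtool dump traffic.rrd > traffic.xml      # with the old 32-bit rrdtool
rm traffic.rrd                              # restore refuses to overwrite
rrdtool restore traffic.xml traffic.rrd     # with the new 64-bit rrdtool
```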
Intel finally released a firmware handling the TRIM command. This update is very welcome, because with this new command the operating system can inform the SSD when a disk block is released. Having this information at the firmware level is very important, because the SSD wear-levelling algorithm uses this pool of free blocks to rewrite dirty blocks somewhere else on the drive, instead of rewriting them in place, which must be avoided with flash memory based drives. The unavailability of the TRIM command is the cause of the performance degradation seen on SSD drives after several days or weeks of heavy use: the list of free blocks shrinks until no more are available.
The hdparm developers now provide a handy script, called wiper, that collects free blocks at the filesystem level and feeds them to the SSD firmware via an hdparm command. hdparm version 9.27 at least is required. This restores the drive's performance nearly to its factory level, depending on the amount of stored data.
Tip: I had to reduce max_sectors_kb to a drastically low value (echo 4 > /sys/block/sda/queue/max_sectors_kb) to get the SSD drive to accept the very long list of sectors to be trimmed.
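A minimal session might look like this (assuming wiper.sh is in the PATH, and that it defaults to a dry run unless --commit is given, as it did for me):

```shell
# check that the firmware really advertises TRIM support
hdparm -I /dev/sda | grep -i trim
# dry run first, then actually trim the free blocks
wiper.sh /dev/sda1
wiper.sh --commit /dev/sda1
```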
Retrieving a bunch of small emails with a POP3 or IMAP client over a slow network can really be a pain, especially when the latency is high. Why? Because the IMAP and POP3 protocol dialog is often serialized between the client and the server, and each processed message requires at least three exchanges: ask for the header and wait for the reply, ask for the body and wait for the reply, ask to update the seen and deleted flags on the server and wait for the reply. If the body of the message is small, most of the time is spent waiting for replies from the server while the network pipe remains idle. Not really optimal.
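A quick back-of-envelope calculation shows why latency dominates (the message count and RTT below are illustrative, not measured):

```shell
# 1000 small messages over a 200 ms RTT link, three serialized
# round trips per message (header, body, flag update)
msgs=1000
rtt_ms=200
rounds=3
serial_s=$(( msgs * rounds * rtt_ms / 1000 ))
echo "serialized: ${serial_s}s"          # the pipe sits idle most of this time
# keeping 10 requests in flight overlaps the waits
pipelined_s=$(( serial_s / 10 ))
echo "pipelined x10: ${pipelined_s}s"
```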
Recently, I discovered retchmail, which provides a nice speedup in POP3 retrieval compared to fetchmail, which I used before. Built on the WvStreams library, it pipelines and parallelizes most of the protocol dialog. Several POP3 servers can be queried simultaneously, and several messages retrieved in parallel, so the network stays busy most of the time.
The Huawei-E172 USB 3G+ key now works out-of-the-box with NetworkManager as shipped with Fedora 9. All configuration is handled by the edit-connection menu, the PIN code is stored in the gnome keyring, and the specific patch for the french SFR broadband provider is no longer required.
This information is taken from the Vodafone Mobile Connect Card driver. You can change the preferred connection mode by sending these AT commands to the card:
'GPRSONLY' : 'AT^SYSCFG=13,1,3FFFFFFF,2,4'
'3GONLY'   : 'AT^SYSCFG=14,2,3FFFFFFF,2,4'
'GPRSPREF' : 'AT^SYSCFG=2,1,3FFFFFFF,2,4'
'3GPREF'   : 'AT^SYSCFG=2,2,3FFFFFFF,2,4'
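The commands can be typed into any serial terminal attached to the key's control port (the device node and the use of picocom are assumptions from my setup; the key usually exposes several /dev/ttyUSB* ports):

```shell
# open the modem control port (device path is an assumption)
picocom -b 115200 /dev/ttyUSB1
# then type, for example, to force 3G-only mode:
#   AT^SYSCFG=14,2,3FFFFFFF,2,4
# the card answers OK when the mode is accepted
```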