Sun Nov 7 11:22:22 CET 2010

Travel

As some of you have already heard, I'll be on the west coast of the US for three weeks, starting Monday (I'll arrive at 16:00 local time). The reason for my stay is work-related, so it doesn't have anything to do with Gentoo. Still, I'll meet at least one fellow Gentoo dev (antarus).

What's more, I won't be able to do much arch work while I'm there. As per usual, armin76 is the guy to talk to regarding Alpha stuff. Hopefully, mattst88 will be a full Gentoo dev by the time I'm back.

If you're interested in meeting me in or around SF between the 9th and 26th, feel free to drop me a line. While I will have to work during the week, I'll mostly be free on the weekends - and, hopefully, in the evenings.


Posted by klausman | Permanent Link | Categories: Community
Comment by mail to blog@ this domain. Why?

Tue Sep 14 21:16:36 CEST 2010

Ferd

Feedback on my last post was mostly positive, so here it is: Ferd v0.3 (homepage, TGZ)

Posted by klausman | Permanent Link
Comment by mail to blog@ this domain. Why?

Wed Sep 8 21:17:24 CEST 2010

Gauging interest

I do a fair amount of stuff using RRDTool. When developing something new to store in and graph with RRD, it often happens that you get data that is wrong. On top of that, rollover (aka cycling or flapping) can introduce artifacts, even though RRDTool is good at detecting those. Since I don't like editing XML dumps by hand (the usual dump/edit/restore dance), I wrote a tool to help me edit out data that I know is wrong. It's not polished enough to release, and getting it ready is a fair amount of work. So here's my question: is there reasonable interest in it? Do you pine for such a tool? If yes, send me a message/comment!
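
For reference, that manual workflow looks roughly like this (file names are just examples):

$ rrdtool dump traffic.rrd > traffic.xml
$ $EDITOR traffic.xml                           # hunt down and fix the bogus <v> values by hand
$ rrdtool restore traffic.xml traffic-fixed.rrd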

Posted by klausman | Permanent Link | Categories: Software
Comment by mail to blog@ this domain. Why?

Mon Aug 23 13:55:09 CEST 2010

Testing TLS and SSL services

Testing an SSL or TLS enabled service isn't all that easy using common techniques: with telnet or netcat, you can only see that something answers on a given port, but you get no useful information regarding the certificates involved. OpenSSL and GNUTLS provide tools to help with that.

Testing SSL servers (in this example, https)

OpenSSL has the s_client subcommand, which can be used to get the information we want:

$ openssl s_client -connect www.ccc.de:443
CONNECTED(00000003)
depth=0 C = DE, ST = Hamburg, L = Hamburg, O = Chaos Computer Club...
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 C = DE, ST = Hamburg, L = Hamburg, O = Chaos Computer Club...
verify error:num=27:certificate not trusted
verify return:1
depth=0 C = DE, ST = Hamburg, L = Hamburg, O = Chaos Computer Club...
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/C=DE/ST=Hamburg/L=Hamburg/O=Chaos Computer Club e.V./CN=...
   i:/O=CAcert Inc./OU=http://www.CAcert.org/CN=CAcert Class 3 Root
---
Server certificate
-----BEGIN CERTIFICATE-----
< Certificate here >
-----END CERTIFICATE-----
subject=/C=DE/ST=Hamburg/L=Hamburg/O=Chaos Computer Club ...
issuer=/O=CAcert Inc./OU=http://www.CAcert.org/CN=CAcert Class 3 Root
---
No client certificate CA names sent
---
SSL handshake has read 1839 bytes and written 409 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: AF312E52F8D5F2B6441436F0D666588FD4C5...
    Session-ID-ctx:
    Master-Key: FF103B4A274E8B90E905AA07C6DB45AB3F32...
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    TLS session ticket:
        < hexdump here >
    Start Time: 1282561578
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)
---

Now you're on a secured connection and can use HTTP as you're used to (i.e. GET, POST etc.). Note that the output above also lets you find out which certificates are in use, who issued them and when they will expire.
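
If all you want are the certificate's subject, issuer and validity dates, you can pipe the server certificate straight into openssl x509 (a non-interactive sketch, reusing the host from above):

$ openssl s_client -connect www.ccc.de:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates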

Testing TLS enabled services (here: SMTP+TLS)

While TLS can operate in the same way that SSL does (encryption starting with the very first bit transmitted), it also has a mode in which you first establish the connection and secure it later on. In the case of SMTP, this happens after issuing EHLO, by sending "STARTTLS" (if the server advertises it in response to EHLO).

$ gnutls-cli -s mx.example.com -p 25
Resolving 'mx.example.com'...
Connecting to '192.168.0.1:25'...

- Simple Client Mode:

220 mx.example.com ESMTP Exim 4.69 Mon, 04 Aug 2008 11:59:34 +0200
EHLO client.example.com
250-mx.example.com
250-SIZE 52428800
250-PIPELINING
250-AUTH CRAM-MD5
250-STARTTLS
250 HELP
STARTTLS
220 TLS go ahead

At this point, the server expects the client (us) to initiate the TLS handshake. Since the GNUTLS client itself doesn't know this (and can't, because every protocol is different), we have to tell it to go into TLS mode. This is accomplished by sending it the ALARM signal (SIGALRM, usually #14). This can be done easily from another terminal:

$ pkill -ALRM gnutls-cli

As a result, the client does all the crypto stuff required and tells us which certificates it has encountered along the way:

*** Starting TLS handshake
- Ephemeral Diffie-Hellman parameters
 - Using prime: 776 bits
 - Secret key: 760 bits
 - Peer's public key: 768 bits
- Certificate type: X.509
 - Got a certificate list of 1 certificates.

 - Certificate[0] info:
 # The hostname in the certificate matches 'mx.example.com'.
 # valid since: Fri Jan 18 09:22:21 CET 2008
 # expires at: Wed Jul 16 10:22:21 CEST 2008
 # fingerprint: A6:8E:2F:1B:03:36:A8:BA:CF:9F:37:0E:53:7E:D0:4A
 # Subject's DN: CN=mx.example.com
 # Issuer's DN: O=Root CA,OU=http://www.cacert.org,CN=CA ...

- Peer's certificate issuer is unknown
- Peer's certificate is NOT trusted
- Version: TLS1.0
- Key Exchange: DHE-RSA
- Cipher: AES-128-CBC
- MAC: SHA1
- Compression: NULL

As with SSL, you now have a secured connection that you can use like any other SMTP connection (or exit):

ETRN
250 OK
QUIT
221 'mx.example.com' closing connection
- Peer has closed the GNUTLS connection
$
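
As an aside, OpenSSL's s_client can perform the STARTTLS exchange itself for a handful of protocols, which saves you the signal dance (a sketch; check your OpenSSL version for the list of supported protocols):

$ openssl s_client -connect mx.example.com:25 -starttls smtp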

Security note

While both clients check the certificate for internal validity and expiry date, they can't do any certificate chain validation unless you provide appropriate root certificates. This means that while they tell you the contents of the certificate, they usually can't tell you whether the certificate's signature is trusted in any sense. Hence, they tell you that it isn't trusted: openssl s_client does this with the line verify error:num=27:certificate not trusted, gnutls-cli with Peer's certificate is NOT trusted.
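
If you do want chain validation, both tools accept a CA bundle; something along these lines (the bundle path is just an example and varies between distributions):

$ openssl s_client -connect www.ccc.de:443 -CAfile /etc/ssl/certs/ca-certificates.crt
$ gnutls-cli --x509cafile /etc/ssl/certs/ca-certificates.crt -p 443 www.ccc.de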


Posted by klausman | Permanent Link | Categories: Tools of the trade, Security, Software
Comment by mail to blog@ this domain. Why?

Sun Mar 28 13:24:46 CEST 2010

Restoring and Disaster Recovery

I know I'm late with this post, but there's a lot going on here currently. So, without further delay, on to my Backup Awareness Day post. Let's talk about restoring and disaster recovery.

I guess it's one of the oldest cautionary tales in the context of backups: some company has an involved, well-thought-out backup plan, and backups are made regularly, probably at considerable expense in money and time. Then comes the day the backup is needed, and to their horror, the company discovers that while the backup jobs ran, the tapes are empty for some obscure reason.

This illustrates that it's not enough to just make backups; you also need to be able to run a restore. Because that last part depends on other measures beyond simple backups, we'll also take a small look at disaster recovery in general. While not everybody needs a disaster recovery plan in the same detail as a company, it's a good idea to think about the "What if" scenarios regarding your backups.

One of the biggest problems for restores is aging media. This is actually a twofold problem. First, the media you make backups on will age, eventually losing information. Some media are worse in this regard (magnetic tape and optical media like CD-Rs and DVDs), some are better (hard disks, for example). On top of the actual medium degrading, you have to keep around a method of actually reading it. Again, tapes are probably worst in this regard, because they rely on mechanically complex tape drives that contain parts that age quickly (like belts and gears made of plastic). Hard disks are a bit better here, since their interfaces don't age so quickly (a modern computer can still read a SCSI disk from twenty years ago, provided the disk itself is okay).

These thoughts might sound a bit far-fetched — you need last week's backup, not stuff from a decade ago. While this is true for the home user, it illustrates an important point when thinking about restore and recovery: do you have the means of copying back your data onto a working computer? Say your flat burns down completely but your data survives on the media you keep at the workplace. How hard is it to find a reader for your tapes? Do you have all the cables to connect the backup disk to your new computer? If you use encrypted media, is there a "spare key" in a secure location in case you forget yours?

Looking further ahead, also consider what happens if you lose all your data and programs. Will you be able to get a copy of the backup software you used? What about the software you handle your data with? Even if you can recover every single bit onto a new computer, it's just blobs of data if no program can read your Deluxe Paint HAM mode graphics. Using open formats is a good idea (not just because of the issues illustrated above).

Finally, let's look at having spare hardware. Some people build so-called multipath disk setups. What this means is that you not only have a RAID-1 (data mirrored on two disks, so you have a copy should a disk die completely), but also two paths to every disk, for example an on-board controller and an add-on card. Some modern SATA disks are prepared for this (being dual-ported) and classic SCSI disks have supported it for years. This way, even if a controller dies, you can still get at your data (provided the OS supports it).

This may sound like a great idea, but it has a big problem: if a controller dies, it will usually crash the whole machine. So your service level will get a dent anyway. Why not just put the spare hardware on a shelf and use it only if it's needed? There's one caveat here: test your spares regularly — once a year or so. Otherwise, Mr. Murphy will make sure that they're all dead when you need them most.

Testing your spares brings me to the final point about recovery you should think about: competence. It's all well and good that your automated backup works wonders for you and you needn't lift a finger, but you must be capable of doing a manual restore. The best way to ensure this is to run a restore onto a spare disk or separate computer once in a while. That way, not only will you find out whether you still know all the details (and passwords), but you'll also see whether your backup is working at all - and whether all your important data is actually included in it.

For bigger setups (read: companies), far more involved disaster recovery plans can be made, including, but not limited to: replacing all hardware within 12 hours; transferring terabytes of data from an off-site location in a short amount of time; having a sister backup site to reduce customer-visible downtime; training employees in disaster recovery; and a whole load of other gritty details. In my experience, very few companies actually have anything resembling a plan for when worst comes to worst. Most home users have better setups :)


Posted by klausman | Permanent Link | Categories: Software, Community
Comment by mail to blog@ this domain. Why?

Wed Feb 24 20:31:15 CET 2010

On Backups

Since today is Backup Awareness Day, I thought I'd write about how I handle backups. In contrast to most articles, I won't talk about the gritty details of making backups but more about what I back up and where. Note that this does not include my workstation at work, since that is a matter of some complication (and not at all interesting).

I have four machines that are of importance to me: my workstation at home, my file server and router, my laptop and finally my co-located server. On each of them, there are different things I back up in different ways.

Workstation

My workstation has two main clumps of data that need backup: my home directory (actually, only parts of it) and the system configuration. It's also a nice example of what kinds of data need backup.

Home directory

I back up my home directory using rsnapshot, keeping the snapshots on a different disk that is only ever mounted while backups are running - that way, even an accidental rm -rf / won't kill them. I run these backups at least daily (sometimes more often) and keep about a week of dailies and two months of weeklies. Since rsnapshot makes highly efficient incremental backups, this doesn't actually use up much disk space. Another very important thing to do with home directories is to prune all the stuff that isn't important. Here are a few examples:

  • .mozilla/firefox/*/Cache/
  • tmp/
  • downloads/
  • .googleearth/Cache/

A few more directories that are specific to my setup are: src (where I keep source code of downloaded programs, but not my own stuff or anything I've patched), data/bg (my wallpaper collection; it's synced between my laptop and my workstation, so there already is a backup of sorts), debug/ (a directory where I keep unpacked sources with debug info on which I work).

I think this illustrates what kind of data you can dismiss. One important thing to keep in mind is that the size of files has very little relation to their importance: I don't mind losing the most recent git checkout of Linus' tree, but losing all the small dotfiles in my home directory would be very, very nasty.
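
To give an idea of what this looks like in practice, here is a minimal rsnapshot.conf sketch along those lines (paths and retention counts are examples, not my actual values; the real file also needs the boilerplate settings like config_version and cmd_rsync, and rsnapshot insists on tabs between fields):

snapshot_root   /mnt/backup/snapshots/
interval        daily   7
interval        weekly  8
backup          /home/klausman/         localhost/
exclude         .mozilla/firefox/*/Cache/
exclude         tmp/
exclude         downloads/
exclude         .googleearth/Cache/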

System configuration

The first thing that pops into mind here is, naturally, /etc, where all of the system configuration files live (theoretically). Due to the way I work with these files (and their importance), I keep them in a version control system, the main repository of which I keep on my co-located server. Note that some VCSs don't handle file ownership and access rights very well, but there are solutions for that. Since there are files in /etc I never touch (yet might get updates for by way of distribution upgrades), I don't keep all of them in the repository but only those I've touched.
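
One simple workaround for the ownership and permission problem mentioned above is to record the metadata in a file that gets committed along with everything else; a sketch (the file name is made up):

$ find /etc -printf '%m %u:%g %p\n' > /etc/.permissions
# commit .permissions together with the rest; restoring the modes after a
# checkout is a small while-read loop (chmod/chown) over that file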

As I said earlier, most system configuration lives in /etc - but not all. For one, some software lives below /opt, including its configuration files. Another place where configuration files might end up is below /usr/local. In my case, /opt contains nothing of value, so I don't make backups of it. I do have a few scripts I've written that live in /usr/local/{bin,sbin}. I keep a copy of all of them in my home directory (in mysrc/), so they're taken care of. Note that my script for making backups of /home includes commands that copy over all the stuff from /usr/local, so I don't have to do it by hand.

Finally, some system state lives in /var. In my case, this is the place where my distribution keeps track of installed packages. If I have to set up my machine from scratch, that list is handy, so I can just use it as a reference of things to install. Depending on the distribution used, other directories might warrant a backup.
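
On Gentoo, one quick way to capture such a list is via app-portage/portage-utils (a sketch; the output path is arbitrary):

$ qlist -I > /root/installed-packages.txt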

File server

My file server has (again) two things to make backups of: the file storage (under /store) and the system setup. Since the latter works exactly the same way as on my workstation, I'll skip it here.

The main problem with backups of file servers is simply their size. In my case, /store is 1 terabyte and I don't have another storage location of the same size. I could just add a 1T disk to the system and be done with it - but back when I built the machine, that was out of the question financially, so what to do?

The data on my file server can easily be classified as "needed often" or "needed seldom". Once a month, a cron job reminds me to move the latter off the machine (usually to external disks and/or DVDs). A handy way to answer "did I use this?" might be the atime record of the file system. For me, that hasn't worked well in the past, so I just jog my memory - and it's a file server for two users, so it's not a problem. Pruning old stuff not only results in off-machine copies, it also makes the space last longer, since I get rid of old data that's just eating disk space more often.

A note on backing up to optical media: I back up every file to two different media and I always include checksums of all files, so I can be sure upon restore that each file is okay. Things that have lived only on optical media for some time (typically more than a year) are copied to new media every 2-3 years if they're still important.
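
The checksum part is nothing fancy; something like this, done before burning and again after restoring (sha256sum is just one choice of tool):

$ sha256sum * > SHA256SUMS     # run in the directory that is about to be burned
$ sha256sum -c SHA256SUMS      # run after copying the files back, to verify them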

In the future I might move to using separate non-RAID disks for backups, but the whole file server setup is in flux, so I don't have anything solid, yet.

Laptop

I seldom use my laptop and I don't have a large working set of data on it. Usually, it's a glorified terminal to access my workstation at home or at work, or my co-located server. As a result, backing up its home directory as often as I do on my workstation just makes no sense. Also, the laptop may sit unused for weeks on end, so anything daily (or even weekly) does not make much sense either. Hence, I have set up anacron to remind me every once in a while to back up the important things.

As for system configuration, the same things mentioned above apply. This works well since I won't change /etc if I don't use the laptop.

Co-located server

My co-located server has the same set of things to back up as most of my other machines: home directories, system configuration and a little bit of extra storage. Actually, strike the last one. I deliberately make no effort to back that up (and my users know it). As for the system configuration, again, the important stuff is kept in a VCS.

The interesting bits about this machine are home directories, the VCS repositories from the other three machines mentioned above, the web content and the MySQL databases I host (mostly for web sites).

For the home directories, I don't guarantee anything to my users. I back up my own directory to my file server at home. I can't do this as often as I'd like, since it's hard on bandwidth and the file server sits behind a meager DSL line, but it's better to have at least some backup that's restorable. Since my mail system delivers user mail into the home directories, backups of /home include the mail as well. I specifically tell my users that I don't back up their home directories, so they're on their own.

The web content I handle exactly the same way as home directories: regular rsnapshots locally, copied over to the file server. I have excluded a few directories that contain big binaries that are merely copies of stuff found elsewhere.

The VCS repositories are backed up locally using the usual backup methods for such systems. I keep a week of backups, made every six hours. For the MySQL databases, I make a backup every three hours, check whether it differs from the previous one and discard it if it doesn't. I keep a week of those, too.
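
The compare-and-discard step is a one-liner at heart; a sketch of the idea (file names are made up, and --skip-dump-date keeps otherwise identical dumps byte-identical):

# sketch of the three-hourly dump job
new=dump-$(date +%Y%m%d-%H%M).sql
mysqldump --skip-dump-date --all-databases > "$new"   # credentials via ~/.my.cnf
if cmp -s "$new" dump-latest.sql; then
    rm "$new"                       # identical to the previous dump: discard it
else
    ln -sf "$new" dump-latest.sql   # changed: keep it and make it the new reference
fi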

These local backups are rsynced to the file server once a day (early in the morning, so as not to interfere with normal surfing at home).

My co-location provider offers backup solutions, too, and I recommend using them if you can afford them. Since I've always had my own backup-to-file-server setup, I never bothered to set them up.

Conclusion

As you can see from the sheer length of this article, I've put quite some thought into my backups. I'm also an IT professional so I tend to do everything myself (we geeks are weird that way). I'm sure for the less technically inclined, there are easier "turn-key" solutions than mine.

Still, the main point of this article isn't the exact mechanisms used, but rather what kinds of things are possible. Also, I hope to have illustrated some of the pitfalls of backups, like making them too seldom or keeping them in the wrong place. Naturally, knowing what data you will need in the event of a catastrophe is important, too. Yes, your irreplaceable stuff is important, but not having to sink weeks into rebuilding your personal setup is nice, too. Significant data loss is almost always going to make a serious dent in your productivity - try to minimize it.

In essence: be aware of what your important data is (and what isn't), be aware of what a good place for backups is (some distance away, but not too far) and be aware of how your restore procedure works. The last bit I'll talk about in more depth in a future BAD post (nice acronym, isn't it?).


Posted by klausman | Permanent Link | Categories: Software, Community
Comment by mail to blog@ this domain. Why?

Sun Dec 6 14:42:25 CET 2009

udev and Titans

Here's a quick rundown on what's going on in alpha land (i.e. what's currently broken):

First of all, starting with version 147, sys-fs/udev uses a syscall from the inotify family (inotify_init1) which isn't implemented on alpha. For now, everything past version 146 (i.e. 147, 148, 149) is hardmasked, because it would result in a system without any devices (including network devices - very annoying!). If any of you are keen on implementing the missing syscall for alpha, you'd have my gratitude (payable in a cold beverage of your choice). I'll try my hand at it, but it's not my strong suit and I have a myriad of other things on my alpha plate.

Martin Schmidt has recently acquired an HP DS15, one of the fastest alpha workstations ever produced. Naturally, he's trying to install Gentoo Linux on it. Unfortunately, this system type (Titan) needs the kernel option LEGACY_START_ADDRESS to be unset, which in turn results in a kernel that doesn't compile. The root cause is a change of 64-bit data types that happened a while ago and was never carried through completely. Matt has hacked up a quick patch which Martin will try out. Naturally, help is appreciated here, too.


Posted by klausman | Permanent Link | Categories: Hardware, Software, Community
Comment by mail to blog@ this domain. Why?

Sat Oct 24 17:33:23 CEST 2009

Working together

Recently, I've noticed some behaviour by package maintainers that really annoys me. I'm talking about the way stabilization requests are made. Normally, a package maintainer opens a stabilization request bug (STABLEREQ) detailing which version(s) of which package(s) he wants the arch teams to test and stabilize.

Closely related, and suffering from the same problems I'll detail below, are keyword requests. Those are pretty much the same as STABLEREQ, but for "~arch" instead of "arch". Also, the testing required is usually not as strict as that for STABLEREQ, for obvious reasons.

For simple packages, neither usually causes problems. For complex packages, it may mean that dependencies need testing and keywording, in some cases five to ten packages on top of the one requested. Unfortunately, some package maintainers have taken up the habit of just dumping the request for their package in bugzilla without checking what dependencies might be needed. Checking the dependencies also means finding out which versions of them actually work and which ones are stable (yes, this might mean talking to other package maintainers!).

Another related gripe of mine: maintainers being pushy about the time frame in which their stuff should be tested and stabilized, and then, when trouble comes up (test suite failures etc.), completely ignoring the bug report the arch team files for half a year or longer.

This kind of added workload (of rather dreary work, to boot) is what makes arch testing so tedious sometimes. Not to mention the burn-out it causes. Not getting any positive feedback (from either users or other devs) doesn't help, either.

Guess I'm turning into a grumpy old dev. But still, try to be a bit nicer to the arch testers, mkay?

PS: Note that there are very positive counterexamples, too: the emacs guys always provide test plans, the security guys are always nice to work with, too. And of course several individuals who are just nice to work with.


Posted by klausman | Permanent Link | Categories: Community
Comment by mail to blog@ this domain. Why?

Sun Jun 28 13:54:50 CEST 2009

New stuff, good stuff

Recently, a few larger packages on alpha have seen major updates. There are caveats here and there, so I'll point them out in this post.

xorg-x11-7.4 and xorg-server-1.5

At long last, we've stabilized xorg-server-1.5 and xorg-x11-7.4 (plus their dependencies, naturally). The largest blocker for this was missing kernel support for PCI access on alpha. Since 2.6.30 fixed that, and because of the pressure from the X11 guys to get rid of old versions (and sundry other packages), we moved to 2.6.30 as the stable vanilla kernel and .29-r5 as the stable gentoo-sources.

However, not everything is peachy. The glint driver, which is used for the Permedia cards that came with many earlier PCI-only Alphas (like the 500au and XP1000), does not work. There's a bug report (freedesktop bug #21546, referenced from g.o bug #268626), but no X11 dev has seen fit to react to it. I've been told the failure is the result of an API change that the glint driver was never updated for. If you use such a card and want to run a recent X11, I suggest bugging the X11 people about it - ideally via the report mentioned above.

Regarding Kernel 2.6.30, there have been reports of SMP compile failures on LKML, but I have been unable to reproduce those. Feel free to bug me if you encounter them.

glibc 2.9

A few minutes ago, I stabilized glibc-2.9_p20081201-r2 for Alpha. It has been running on all of my alphas for a while now, without any noticeable flukes (besides one, which I'll detail below). Armin76 has rebuilt his testing chroot with it and found no errors either, so I'm quite sure this'll work out nicely.

The one caveat isn't a bug in glibc per se, but one in Netfilter. Under certain circumstances, a race condition can be triggered in Netfilter that results in a silently dropped DNS packet, which in turn results in a timeout when connecting. I noticed this a few months ago and hunted it down.

The main ingredients are: a machine running a glibc of the 2.9 series, an SMP Netfilter machine and low latency between the two. What happens is that newer glibc resolvers rapid-fire the two requests (IPv4 and IPv6, i.e. A and AAAA) that are triggered by gethostbyname(). Due to the timing, this sometimes triggers a race condition in the Netfilter code the packets traverse on your router/firewall. The easiest solution right now is to add "options single-request" to your resolv.conf. This will make lookups slower, but not perceptibly so unless you do a really large number of them per second. If you occasionally experience delays when connecting (typically with SSH) and it's not a delay on the other side, you might want to try the fix described above. Details on the bug can be found in my post to net-devel.
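
The workaround is a one-line change; a resolv.conf along these lines (the nameserver address is just an example):

# /etc/resolv.conf
nameserver 192.168.0.1
options single-request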

The fix to Netfilter is very much the opposite of trivial, I am told. And even if it were fixed tomorrow, it would take quite a while to phase out the old code.


Posted by klausman | Permanent Link | Categories: Software
Comment by mail to blog@ this domain. Why?

Sun Jun 7 12:49:32 CEST 2009

Stable relationships

Recently, there were a few "out of the blue" requests for stabilization of smaller apps on alpha. Deciding whether to actually do that isn't quite as easy as it may seem.

The thing that currently plagues me most as the Alpha boss (of a team of one and a half...) is lack of manpower. My personal goal is to keep the number of open bugs for alpha below 30, ideally below 20. Currently, we're at about 50. Reaching 30 is possible; keeping it that way is strenuous, because it means I spend at least 4-6 hours a week doing nothing but stabilization work. Since I have other hobbies, too, and possibly even a social life, staying below the 30-bug mark happens less often than I'd like. On top of this, if I get sick or a major stabilization blob (like a new KDE version or the whole Xorg-1.5 business) comes along, it gets even worse.

So there are two things I want to point out in this article: one, arch testing and you; two, the reasons for stabilizing packages.

The first is easy: if you use Gentoo/Alpha and have at least two neurons to bang together, you can help out. Just look at the open bugs on which alpha is CC'd (Feed) and see if you can help. Stabilization bugs are rather easy: test the package (ideally with most of its USE-flags enabled and trying to run its test suite) and report back. In the comment, tell us what you tested (and how) and paste in the emerge --info output of your alpha machine. If you run into trouble (compilation fails etc.) please also comment on the bug in the usual bug-reporting manner. The same goes for keywording bugs, but tests needn't be quite as thorough. As for the "real bugs", i.e. the stuff that describes a failure on alpha that needs to be fixed, feel free to debug them, send patches etc. If you have the skills to do this, you know how it works. You might even want to become an official arch tester. It's easier than it sounds. Just drop me or armin76 a line on #gentoo-alpha/Freenode.
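
For a typical stabilization bug, a test run might look roughly like this (the package atom is made up; adjust USE flags to taste):

$ echo "=app-misc/foo-1.2.3" >> /etc/portage/package.keywords   # accept the ~alpha version
$ USE="doc examples" FEATURES="test" emerge -1v =app-misc/foo-1.2.3
$ emerge --info                                                 # paste this into the bug comment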

Second: what do we keyword and stabilize? The simple answer: everything that is needed and possible. "Needed" means there actually is somebody who uses the thing - yes, a single person is enough, provided the second condition is met: it has to be possible. Java, for instance, fails that test, since there simply is no really working Java for Alpha right now. This also touches on the workload aspect of keywording and stabilization. As I explained above, the workforce is already spread rather thinly, and every additional package, no matter how tiny, adds to that workload. So if somebody is comfy with just adding a package to their local package.keywords and then using it, go ahead. Another factor to keep in mind is the amount of specialist knowledge needed to test and maintain (as the alpha arch team) a package. Maintaining, say, app-misc/mmv is way easier than, for example, sci-libs/hdf5. As such, we're always happy to have users who help with those special packages. If you don't know how to help, just ask on IRC or by mail and we'll work something out.


Posted by klausman | Permanent Link | Categories: Tools of the trade, Software, Community
Comment by mail to blog@ this domain. Why?