You are viewing lotso


Apr. 5th, 2009

Postgresql 8.4 -> Where are On Disk Bitmap Indexes?

Postgresql 8.4 is nearly out. There are quite a few things which look interesting to me. However, the one thing I'm still missing and can't find the status of is the On-Disk Bitmap Indexes, which were supposed to come out in the 8.4 release.

Anyone from the PostgreSQL team privy to that info? I can't really seem to find it on Google.


Nov. 9th, 2008

FC10 VPN Setup to Windows PPTP using Network Manager

I was going through this for a couple of hours while trying to configure FC10 (moved from gentoo [for now]) and NetworkManager to connect to my office's VPN server running on Windows(R) ISA Server.

I tried a variety of methods and configurations but kept getting errors of this sort:

Dec 26 15:02:00 localhost pppd[5483]: Plugin /usr/lib/pppd/2.4.4/ loaded.
Dec 26 15:02:00 localhost pppd[5483]: pppd 2.4.4 started by root, uid 0
Dec 26 15:02:00 localhost pptp[5484]: nm-pptp-service-5480 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Dec 26 15:02:03 localhost pptp[5493]: nm-pptp-service-5480 warn[ctrlp_disp:pptp_ctrl.c:956]: Non-zero Async Control Character Maps are not supported!
Dec 26 15:02:09 localhost pppd[5483]: MS-CHAP authentication failed: E=691 Authentication failure
Dec 26 15:02:09 localhost pppd[5483]: CHAP authentication failed
Dec 26 15:02:10 localhost pppd[5483]: Connection terminated.

Authentication errors, but the question was WHY?

Turns out (and this may not apply to you, but it definitely applied to me):

My office is using Windows ISA server (200x version I would think)

So, in Network Manager, there are 3 option boxes (Username, Password, and Domain).


So, I happily added:

Username : lotso
Password : lotso's password
domain : lotso's windows domain

and ended up with those errors above!

Two hours and MUCH googling later, I tried:

Username : lotso's windows domain\lotso
password : lotso's password
domain : (left empty)

and it WORKED!!


On the other hand, this is _not_ a bug with NM or PPTP; per digitalwound, his setup works fine with those 3 fields filled in separately.
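For the curious, the working settings amount to something like this pppd peers file. This is only a sketch: the server name, domain, and file name below are placeholders, not from the post, and NetworkManager generates the real config for you behind the scenes.

```shell
# Hypothetical /etc/ppp/peers file mirroring the working NM settings;
# vpn.example.com and MYDOMAIN are placeholders, not from the post.
cat > peers-office.example <<'EOF'
pty "pptp vpn.example.com --nolaunchpppd"
name MYDOMAIN\\lotso
remotename office
require-mppe-128
file /etc/ppp/options.pptp
EOF
```

The point is simply that the domain rides along inside the username, while the standalone domain field stays empty.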

FOSS, My 2008 Pictures

Nowadays, the OSS blogging is really sparse on my part.

Here's some pics instead

Aug. 12th, 2008

python-2.4 to python 2.5 upgrade hell

It's been a while since the last post.

Not much has been happening, except that I'm missing out on life due to work commitments, which seriously sucks.

In any case, been battling with the python upgrade on my gentoo box at home.

I've been having undefined symbol issues with pygtk and pygobject all through last week, and I just solved the issue like 10 minutes ago, so I can finally go to sleep.

The main issue was the build borking when I was compiling a python app which needed gtk support. The thing is, it also needed "threads" support which, for some reason, is not turned on by default in Python 2.5 but is in Python 2.4!

Hence, I spent the last week pulling out hair, trying all the different permutations of WHY it USED to work and doesn't now.

It ended up being a simple USE flag which was not checked during the ebuild checks.

Why.. Oh.. Why...
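For anyone hitting the same wall, the fix boils down to a package.use entry plus a rebuild; a minimal sketch, assuming a Gentoo box (the emerge/equery commands are shown as comments since they need a live portage tree, and the file is written locally here rather than to /etc/portage):

```shell
# Sketch of the USE-flag fix on Gentoo. On a real box, append to
# /etc/portage/package.use, then rebuild:
#   equery uses dev-lang/python     # look for -threads vs +threads
#   emerge --oneshot dev-lang/python dev-python/pygtk dev-python/pygobject
printf '%s\n' 'dev-lang/python threads' >> package.use.example
```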

Jun. 23rd, 2008

Automatic Raid Array Rebuilding

Hi guys, long time no post. The last post was in March and it's now already June.

Been busy as usual; however, I've not been dabbling as much as I "should", as I've been busy with other NON-FOSS related stuff. (psst: I'm now heavily into photography. Went to shoot some Japan GT queens!! Kawaii)

Anyway, since this is (nearly) a purely FOSS-based blog, I'm gonna talk about my automatic RAID rebuilding script.

You see, what happens is this: my postgresql box (celeron, 2x500GB in RAID 1) has a tendency to keep dying once in a while for X reasons. (I have, till now, been unable to locate the reason why it dies so often.) I've tried the write-all, read-all test using dd, but thus far have not seen any errors thrown. So, it's been a manual routine of...

go to work. see the email : Your raid has Died!
log onto the box, do the rebuild.

After a while, this just becomes tiring and I decided to fsck it and make it automatic.

Here's the script


FAIL_DRV=`mdadm --detail /dev/md0 | grep faulty | awk '{print $6}'`

if [ -n "$FAIL_DRV" ]; then
  echo "Detected degraded array : $FAIL_DRV"
  echo "Starting automated array rebuild process"
  mdadm /dev/md0 --fail $FAIL_DRV --remove $FAIL_DRV --add $FAIL_DRV
else
  echo "Nothing to do"
fi

Simple eh..

So, now I don't have to come to work to see it all wonky because it'll automatically rebuild itself.
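To run it unattended, the script just needs a cron entry; a hypothetical one (the script path is an example, not from the post, and it's written to a local file here rather than the system crontab):

```shell
# Hypothetical system crontab line running the check every 15 minutes;
# /usr/local/sbin/raid-rebuild.sh is an assumed path for the script above.
echo '*/15 * * * * root /usr/local/sbin/raid-rebuild.sh' >> crontab.example
```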

Some of you may ask, how come I don't just replace the drive? Because I can't find any replacement PATA drive at 500GB capacity! The largest I can find is 160GB.


Apr. 5th, 2008

GVFS makes me happy

Did you know that nautilus is now integrated with the new GVFS (GNOME virtual filesystem), which replaces the older GnomeVFS module?

The new one is partially built on top of fuse, or rather integrates with fuse and it makes mounting and accessing files from network shares a much better experience than it was previously.

With GVFS and nautilus, when you browse to a share, it’s automatically mounted under ~/.gvfs:

gvfs-fuse-daemon on /home/gentoo/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=gentoo)

~/.gvfs $ ls -al
total 12
dr-x------   3 gentoo users     0 Apr  5 03:38 .
drwx------ 118 gentoo users 12288 Apr  5 04:37 ..
drwx------   1 gentoo users     0 Feb  1 21:20 mediacenter on

and thus, you can actually use applications to access those items under those mount points.

For example, one of the reasons I rely heavily on totem and nautilus is their SMB support, whereby I can connect to my home mediacenter server and stream videos from it without going through the motions of actually mounting the drives/shares.

(did you know that only totem and nautilus share this feature in gnome and the rest of the programs are brain-dead in this regard?)

Kinks :
Currently, based on my limited testing (it’s 4+am as I write this as I was playing with GVFS+Nautilus 2.22), there are bugs when you try to access a smb share which is password protected. For X reasons, the usual nautilus “password verification” box does not come up.

And putting smb://user:password@server/share does not work either.

But if you were to drop down to the CLI and do a

$ gvfs-mount smb://gentoo@…
Password required for share storage on …
Domain [HOME.NET]:

then that would work and you’ll get the mountpoint in ~/.gvfs and you can access files from that location.
It’s an additional step, but hey, at least now it’s

1. transparent (to an extent) and most apps can see it (tried mplayer/gmplayer/xine)
2. FAST. It’s pretty much faster than the previous incarnation of using smb protocol through nautilus. (I don’t have any real stats)
3. Has a tendency to crash, though.
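Putting the workflow together as a sketch (the share and host names below are invented, not from the post; gvfs names the mount directory roughly "share on host"):

```shell
# Sketch of the CLI workaround; "mediaserver" and "storage" are invented names.
SHARE_URI="smb://gentoo@mediaserver/storage"
MOUNTPOINT="$HOME/.gvfs/storage on mediaserver"
# gvfs-mount "$SHARE_URI"          # prompts for password and domain
# mplayer "$MOUNTPOINT/video.avi"  # any app can read through the fuse mount
echo "$MOUNTPOINT"
```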

Mar. 6th, 2008

Steps to secure your site

So, in today’s lesson I will elaborate on how one site decides to put additional “protection” towards phishing or, in more general terms, how to secure your site against malware and other badware.

1. Open an account with RHBBank
2. Subscribe to internet banking
3. Go Overseas
4. Attempt to pay your credit card fees etc via internet
5. Pull hairs in attempts

So basically, I’ve been trying to access RHBbank’s secure site and keep getting either permission denied or server errors or something along those lines.

So, in an off-hunch, I tunnelled to my home squid proxy server and used that as the proxy for firefox. I fired up the browser and was greeted with the RHB secure page!!

Open up opera, (normal settings) and fire up the same page and “internal server error”

So, either one of two things is happening.

1. RHB is looking at IP addresses and denying access to anyone outside M’sia’s IP address range
2. My company’s outgoing filter regards RHBbank as malware etc. and prohibits me from visiting it.

funny business.

Feb. 28th, 2008

Open Source Trend - M'sia 2nd in line

I just noticed this on Google Trends.

Feb. 5th, 2008


I’m in San Jose. Still pondering if I can make it to the Local PUG (postgresql user group) meeting to be held on Feb 12 since I’m here.

Will get the chance to meet David Fetter and team.

I’ll see what happens.

PS : I freaking hate it here this time of year. It’s cold and so are my fingers! I need to constantly rub my hands together

Jan. 27th, 2008

You say Lemon, I say Lemonade (A story)

The past few weeks were not all that great: in addition to facing additional challenges at my primary day job, I also had to deal with my pet project, which exists to help smoothen my day job’s activities.

Some of you may know that my pet project involves pulling gobs of data into a PG instance to make my own version of a company datamart. I’m not talking about small gobs of data, but more in the range of 200+GB. (It was more, but in one of the efforts to control/tune the server, I deleted close to 2-3 months’ worth of data.)

200+GB may not seem like much to you guys who get to play with some real iron or some “real” server hardware. All I had was a Celeron 1.7G w/ 768MB of RAM and some gobs of IDE 7200 RPM drives. In short, all I had was lemons and I needed to make the best of it!

Actually, all was working fine and dandy up until I decided to make a slave server using Slony-I + PGpool, and while that was a good decision, the hardware involved was the same if not worse (512MB RAM only). When I started to implement that, I was faced with 2 issues.

1. Replication would lag behind by up to a day or so; waiting for the next sync (the dreaded fetch 100 from log) was taking too long.
2. My nightly vacuum job went from an average of 4+ hours to like 27+ hours.

So, in an effort to get things under control, I went through a few paths and hit more than my share of stumbling blocks. One of the things I tried was to reduce the amount of “current” data in a particular table from 1 month -> 2 weeks -> 1 week (moving the rest into a so-called archive table, but still in the same tablespace). This didn’t really bode well: I initially tried to move the data in 3-hour chunks, which failed, then in 1-hour chunks, and finally in 15-minute chunks.

But in the end, it was all really futile, because what I was essentially doing was just generating more and more IO activity (and that’s not a good thing). In addition, I also had to deal with vacuuming the tables due to PG’s MVCC feature, and that was also not fun.

So, in the end, I broke my 3x500GB RAID 1 mirror (1 spare disk) and used the spare as the Slony-I log partition. Initially, that wasn’t all I did: I also moved the 2 main problematic tables from the main raid1 tablespace into that 1-disk tablespace. (That was also a mistake.) It didn’t help at all; IO activity was still high and I wasn’t able to fix my vacuuming process either.

Time for another plan.

This time around, what I did was to move the 2 big tables back into the raid1 tablespace and leave the slony logs on the single disk. In addition, I also made a few alterations to the manner in which I pull data from the main MSSQL database and the way it is inserted into PG.

This time around, I’m utilising partitioning and some additional pgagent rules to automatically switch into a new table every 7 days and in doing so, I also had to change a few more other items to get things to work smoothly. I did this last Friday and based on the emailed logs, I think I’ve made a good decision as right now, everything seems peachy with the vacuum back to ~4 hours and there’s also no lag in the Slony replication.

I still have another thing to do, which is to alter the script I use to pull from the main DB, as I’m being kicked (requested) to pull from an alternate DB which has a slightly different architecture.

A 2-disk RAID1 is definitely MUCH better than a single-disk tablespace. With the amount of read/write activity that I have, a single disk is just not doable.

So, that’s how I made lemonade with my lemons. (hmm.. does this sound right?)

Jan. 12th, 2008

Postgresql 8.3 Features I'm looking forward to

PG 8.3 is coming along soon. (Although I read from Bruce M that there's likely to be an RC2 coming out.)

In any case, I looked through the pgwiki and it looks like there are only 2 features which I'm looking forward to.

  • HOT
  • Create table like including indexes (although right now, this is being automated via a stored procedure/function)

The other thing which is nice, but not absolutely necessary, is the multiple autovacuum worker feature. My concern is largely with the few very large tables which I used to have. (I've since sliced them down into partitions by date range to keep them manageable. I initially just wanted to see how _much_ data it could cope with before my system** started to bog down. BTW, it turned out to be approx 200 million rows, and now I know.)

Of late, the nightly vacuum has been taking a long time and this is, in part, a fault of mine due to a design issue. I won't go too much into this, but know that I need to revisit my current ETL implementation and where the data goes in the DB.

As of right now, I'm pulling data from a MSSQL server into PG to be made into a data-mart. My current process involves pulling from MSSQL into a table in PG. Unlike the usual partitioning methods, namely a master table holding no data, or inserting directly into the partition, I chose to insert into the master table and then, 1 week later (I started with 1 month, then 2 weeks, and ended up with 1 week's worth of current data in the master table), start to offload data from the master table into the partition.

Master Table (1 wk data)
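The master-table-plus-weekly-partition layout can be sketched with inheritance (PG of this era has no declarative partitioning). The table and column names below are invented, not the post's actual schema, and the SQL is only written to a file here for illustration:

```shell
# Sketch of inheritance-style partitioning as described above; 'calls' and
# its columns are invented names, not from the post.
cat > partition.example.sql <<'SQL'
-- master table holds ~1 week of "current" rows
CREATE TABLE calls (id bigint, called_at timestamp, payload text);
-- weekly partition, CHECK-constrained so constraint_exclusion can skip it
CREATE TABLE calls_2008w02 (
    CHECK (called_at >= '2008-01-07' AND called_at < '2008-01-14')
) INHERITS (calls);
-- weekly offload from master into the partition, roughly:
--   INSERT INTO calls_2008w02 SELECT * FROM calls WHERE called_at < '2008-01-14';
--   DELETE FROM calls WHERE called_at < '2008-01-14';
SQL
```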

I was looking through my system's load and found that it's always on IO wait. Performing a vacuum on the large table after the data offload into the partition took quite a while due to

  1. The table is large
  2. The indexes are sometimes even larger than the table size
  3. The number of indexes in that table
  4. My usage of a concatenated prikey named unique_id to simplify the loading process, which ended up being a bad decision because I needed to create the same prikey (non-concatenated, as an index) anyway to improve join performance. Hence, in some sense, I have double the amount to vacuum through. Bad. Bad. (David Fetter warned me of this but I chose to shoot myself in the foot anyway.)

So, I figured that by reducing the amount of data in that particular table, I could well reduce the amount of time spent vacuuming it. (Note that I don't know how true this hypothesis of mine is, but I'm giving it a shot anyhow.)

Note: I'm looking forward to 8.4, which I don't really know when it'll arrive, but I'm hoping that by then, (on disk) bitmap indexes will be made available and my (multiple) indexes can be made smaller and more efficient. (Up to 8 indexes on a table!)

** : The system in question is a celeron 1.7G/768MB RAM and 2x500GB Raid 1 w/ ~250GB DB size

Jan. 10th, 2008

The Doraemons Conversations

After waiting for such a long time, and after the long wait to compile QT and also skype 2.0 (beta download), I finally got the el-cheapo webcam which I bought nearly a year ago to work. (Actually, I think it was longer than that; it was during version 1.3 of skype, IIRC.)


The driver used was gspcav1, and for the webcam to work, you have to enable video4linux support in the kernel (which I didn’t bother with previously, since skype didn’t support video in the olden days anyway).

So, after all that, I now have Skype for Linux working. (didn’t test sound though)

But I’m happy that I don’t need to spend RM65 to get another webcam like what colin did.


Jan. 6th, 2008

SQL - pgpool-II (Step 2)

So, this is step 2 to getting replication + load balancing to work for postgresql.

I've already detailed the 1st step of getting Slony to work in a previous blog. (That was on a development machine/vmware image. When I tried it on the production/slave server, I was faced with some issues which I might elaborate on in another post. It all boiled down to me shooting myself in the foot. What to do; it was a time when I wasn't connected to the internet and thus had no googling privileges.)

So, here are my experiences with pgpool, and it was also a little bit like shooting myself in the foot (again!).

First off, I started out using the _wrong_ version of pgpool. The newest version of pgpool-II (note: pgpool-II, not pgpool-I) is 2.0.1, and the newest version of pgpool found on the yum mirrors (I'm using centos4/5) was 2.01 (well, the numbers match, don't they?). The only difference: the one on the yum mirrors was pgpool-I, not pgpool-II. And documentation on pgpool was sparse. (I googled everywhere, read all the relevant and NON-relevant mailing lists and found nothing much to go on.)

It was not until I signed up to the pgpool mailing list (which is very low volume, by the way) and interacted with one of the Japanese developers that I found out I was in fact using the OLD version of pgpool, pgpool-I, which unfortunately had the same version number as pgpool-II!
(I even downloaded the tarball from pgfoundry[but I _did_ download the _correct_ tarball] and searched through the source to figure out what was happening.)

Through that little (big!) mistake of mine, I was tearing my hair out for the past 3+ weeks. (Well, I didn't play with it every day, in-between my dayjob and such....) However, I did get pgpool-I to work properly with a little tweaking, and I could get load-balancing to work, albeit not as advertised: I couldn't get it to work without it also functioning as replication (of sorts anyway, which was the reason I couldn't deploy it, as I was using slony).

So, after I found out my mistake last Friday, I started to google for a new RPM of pgpool-II (newest version 2.0.1) but was unable to locate one anywhere. The latest RPM I could find was version 1.3, which was _too_ old in a sense. (It's always better to have the latest stable version.) So, I had to engineer a way to get an RPM from the tarball. Luckily, the tarball from pgfoundry also contained the pgpool.spec file, which was packaged by Devrim. Unfortunately for me, the spec file was a little old in that it referred to the 2.0 beta1 version. It wasn't too much of an issue, as all it needed was a little hack here and a little hack there. (I was getting a bad owner/group permission error which I narrowed down to the .spec file not having valid user/groups.)

After that was done, a rpmbuild -ba pgpool.spec got me an RPM.

After that, I just installed it, configured pgpool.conf, and got it up and running as advertised with replication mode off, master/slave mode on and load balancing mode on.
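For reference, those three modes map to pgpool.conf settings along these lines (a sketch written to a local file; parameter names are from pgpool-II, so verify them against your installed version):

```shell
# Hypothetical excerpt of pgpool.conf for Slony + load balancing;
# check the parameter names against your pgpool-II version's docs.
cat > pgpool.conf.excerpt <<'EOF'
replication_mode = false     # Slony-I handles replication, not pgpool
master_slave_mode = true     # writes go to the master only
load_balance_mode = true     # SELECTs are spread across backends
EOF
```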

Cool.. I'm rolling this to production on Monday.

So, this means I'll have 1x Master (1.7G celeron/768MB RAM, 500G RAID1 with ~200GB of data) and 1x Slave (1.7G celeron/512MB RAM, 3x160GB raid0). I still have another box sitting under my desk which has even poorer specs than the above, but I think it'll work out just fine.

Cool..Ultra Cool Even!!

If anyone wants the RPM or the modified spec file, do drop me a line and I'll post it to you or something.

Dec. 24th, 2007

SQL - Slony-I (step 1)

Been playing around with some level of replication for Postgresql. Like with all FOSS-based software, there are lots of choices to choose from, and that, in itself, though a blessing, is also a curse. There are just too many choices! (Both FOSS and Non-FOSS per se)

1. Sequoia
2. PgCluster
3. CyberCluster
4. Slony-I
5. PgPool
6. Skytools (this one is from Skype)

and I believe the list goes on. In any case, my requirements are just 2, I think (for now anyway):

i. Replicate only a subset of the tables. (not the entire db)
(AFAIK, pgcluster, while easier to configure is also an entire DB replication solution, which is not what I wanted)

ii. Connection load balancing to a few read-only slaves (for select queries only)

Hence, based on the overflowing amount of information on which option to choose, I finally arrived at using Slony-I and pgpool, and of the two, I’ve (more or less) already completed the configuration of Slony-I.

For Slony-I, I made sure that I understood how to do the “old-style” which is by using the cli, before I moved on to doing the rest of the configuration using pgadmin which is way easier.

There are a few caveats when using Slony-I, and I’ll list down my experiences playing with it on both gentoo and centos 4 (the latter running in a VM).

1st off, version 1.2.12 is out from the slony website but gentoo is still at 1.2.10. The easiest thing to do is just to hack the ebuild, changing the version from 1.2.10 --> 1.2.12 (gentoo bug #143600), and move it to /usr/local/portage.
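The bump itself is just a rename of the ebuild into a local overlay; a sketch using local stand-in directories (on a real box these would be /usr/portage and /usr/local/portage, and the dev-db/slony1 category is from memory, so double-check it):

```shell
# Sketch of the ebuild version bump, using stand-in dirs instead of the real
# portage tree; dev-db/slony1 is the assumed category for the package.
mkdir -p portdir.example/dev-db/slony1 overlay.example/dev-db/slony1
: > portdir.example/dev-db/slony1/slony1-1.2.10.ebuild   # stand-in for the real ebuild
# the "hack" is literally a copy under the new version's filename
cp portdir.example/dev-db/slony1/slony1-1.2.10.ebuild \
   overlay.example/dev-db/slony1/slony1-1.2.12.ebuild
# then, on the real system:
#   ebuild /usr/local/portage/dev-db/slony1/slony1-1.2.12.ebuild digest
#   emerge slony1
```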

So, in that sense, building on gentoo was relatively straightforward and less than 10 min job (excluding compilation)

But on centos, it's another matter since there's no default rpm supplied. Only a src rpm was available, and I'm not too utterly familiar with rpm building. (I switched to gentoo nearly 4/5 years ago, as I hated fedora's upgrade cycle and centos was "supposed" to be server-grade.)

In any case, most of the caveats are when dealing with centos. For one, since this is a src.rpm, you have to compile it 1st.

Hence, you need these additional packages :

1. bison
2. flex
3. gcc (and all it’s dependencies)
4. rpm-build
5. postgresql-devel
6. docbook-style-dsssl
7. netpbm-progs (and the netpbm dependency)
8. (there might be more, as I didn’t document it)

Once you start compiling, you’ll run into 1 error, caused by the NAMELEN of the docs. (This is marked as bug #159382, and the solution is to either upgrade to centos 5 (supposedly fixed by that release; keyword = supposed) or to hack it. I chose to hack it.)

depending on where your docbook files are, you can do this
cd /usr/share/sgml && perl -pi.bak -e 's/(NAMELEN\s+)44/${1}256/' \
  `find . -type f | xargs grep 'NAMELEN.*44' | sed -e 's/:.*//'`

So, after that was resolved (which took 1-2 hours w/ scouring the net etc.), I moved on to the experimenting stage. I used articles from these few locations:

slony-i official docs
WhoAmI’s Blog
OnLamp Article from 2005
Pgadmin Archives
Pgadmin Docs

Anyway, a few more caveats with the configuration:

1. Ensure you use a .pgpass file for the passwords (chmod go-rwx ~/.pgpass). Each line takes the form host:port:database:user:password, e.g.

*:*:*:postgres:pguserpassword

2. Ensure that you use sane configs for your pg_hba.conf file (use trust/ident authentication 1st just in case, to ensure it’s not due to that if it’s not working)

3. ensure that the connection string used for slon/slonik also uses the “user=postgres” line.
(Notice that this guide doesn’t have the user to connect as in the slonik shell script. This caused me some headache, as I was getting both a password error as well as some “cannot connect admin node xxx” issues.)

4. Create the replication using either directly using shellscripts or using pgadmin3. (i followed both the examples from the pgadmin docs as well as the mail I found on the pgadmin mailing list - links provided above, with the exception that I didn’t make it 2 way as in slave<-->master but only master-->slave and slave-->master.)

5. starting the slon process is as simple as (I used a config file instead)
$cat > slon_master.conf
cluster_name = 'pgcluster'
conn_info = 'dbname=testcluster host= user=postgres'

$slon -d4 -f slon_master.conf

(-d4 to give lots of debug output)

on the Master DB
2007-12-24 01:04:12 MYT DEBUG2 syncThread: new sl_action_seq 1 - SYNC 217
2007-12-24 01:04:16 MYT DEBUG2 localListenThread: Received event 10,217 SYNC
2007-12-24 01:04:17 MYT DEBUG2 remoteListenThread_1: queue event 1,195 SYNC
2007-12-24 01:04:17 MYT DEBUG2 remoteListenThread_1: UNLISTEN
2007-12-24 01:04:22 MYT DEBUG2 syncThread: new sl_action_seq 1 - SYNC 218

on the Slave DB
2007-12-23 22:05:02 MYT DEBUG2 remoteWorkerThread_10: SYNC 227 processing
2007-12-23 22:05:02 MYT DEBUG2 remoteWorkerThread_10: no sets need syncing for this event
2007-12-23 22:05:04 MYT DEBUG2 remoteListenThread_10: queue event 10,228 SYNC
2007-12-23 22:05:04 MYT DEBUG2 remoteWorkerThread_10: Received event 10,228 SYNC
2007-12-23 22:05:04 MYT DEBUG3 calc sync size - last time: 1 last length: 2005 ideal: 29 proposed size: 3

6. BTW, there’s no need to do a database dump and restore of the tables you want replicated. It’s just as good to create the schema w/o any data and start the slon processes. I learned that all my effort to dump and restore the replicated tables just went down the drain, as Slony-I will simply truncate the table (this was a command I caught a glimpse of when slon started) and restart from scratch. (I really wonder if this is intended behaviour. What happens when the slon processes go down? It seems quite fragile, so I’ll have to look into that.)
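For reference, the CLI ("old-style") route from the caveats above can be sketched as a single slonik script. Everything below is an invented example (node ids, hostnames, and public.mytable are not from the post), and it's written to a file for review rather than piped straight into slonik; check the Slony-I docs for your version before running anything like it.

```shell
# Rough slonik sketch; node ids, hosts and 'public.mytable' are invented.
# Note the 'user=postgres' in every conninfo, per caveat 3 above.
cat > setup.slonik.example <<'EOF'
cluster name = pgcluster;
node 1 admin conninfo = 'dbname=testcluster host=master user=postgres';
node 10 admin conninfo = 'dbname=testcluster host=slave user=postgres';
init cluster (id = 1, comment = 'Master node');
store node (id = 10, comment = 'Slave node');
store path (server = 1, client = 10, conninfo = 'dbname=testcluster host=master user=postgres');
store path (server = 10, client = 1, conninfo = 'dbname=testcluster host=slave user=postgres');
create set (id = 1, origin = 1, comment = 'Replicated tables');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.mytable');
subscribe set (id = 1, provider = 1, receiver = 10, forward = no);
EOF
# then: slonik setup.slonik.example
```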

Next up is to look at pg-pool. That’ll be another fun(?) thing to look at.

BTW, I’m looking to do the replication to another (low-end celeron) box, perhaps just doing a raid0 out of 3 drives for greater performance(?), and then use pg-pool to load balance to the raid0 box.

Build performance and redundancy through multiple unreliable boxes, eh? The google philosophy.
I’ve got a few low end boxes lying around in the office which can be put to use I suspect.

Dec. 19th, 2007

Gnome-2.20 - Totem Backend Changed to Gstreamer (By Default)

This sucks.. it’s 3am and I’m battling with Gnome-2.20 and the new Totem gstreamer backend, which is refusing to play nice with RMVB files. (Actually, I think it is unable to handle any codecs not supported by gstreamer, which is also why there’s the pitfdll plugin, which is not in gentoo’s portage by the way.)

I really like totem as it integrates nicely with gnome (in general), not to mention that it is also able to play/stream from a smb share, unlike the rest of gnome and linux (in general and in my experience anyway). With all other options, one has to mount the smb share into linux, or copy the entire file onto the system, before one can really play it, which totally sucks by the way.

It also seems that I was stuck using totem-2.16 (I can’t remember why), and in totem-2.18, the gentoo people changed the default totem backend from xine to gstreamer. (Even though xine is still a supported backend, based on what I read on the totem website.)

So, going through the internet for “possible” solutions, I finally ended up hacking the ebuild to suit _my_ needs.

 $ diff -Nau /usr/portage/media-video/totem/totem-2.20.1.ebuild /usr/local/portage/media-video/totem/totem-2.20.1.ebuild
--- /usr/portage/media-video/totem/totem-2.20.1.ebuild  2007-11-29 14:06:23.000000000 +0800
+++ /usr/local/portage/media-video/totem/totem-2.20.1.ebuild    2007-12-19 02:55:44.000000000 +0800
@@ -103,7 +103,7 @@
        # use global mozilla plugin dir
        G2CONF="${G2CONF} MOZILLA_PLUGINDIR=/usr/$(get_libdir)/nsbrowser/plugins"

-       G2CONF="${G2CONF} --disable-vala --disable-vanity --enable-gstreamer --with-dbus"
+       G2CONF="${G2CONF} --disable-vala --disable-vanity --enable-xine --disable-gstreamer --with-dbus"

        if use gnome ; then
            G2CONF="${G2CONF} --disable-gtk --enable-nautilus"

Note : This at least made it able to play rmvb files once again.

Note 2: You may need to make a symlink to your win32codecs install location, as xine defaults to searching in /usr/lib/codecs (which doesn’t exist in gentoo).
