PostgreSQL 8.4 -> Where are the On-Disk Bitmap Indexes?

PostgreSQL 8.4 is nearly out. There are quite a few things in it which look interesting to me. However, the one thing I'm still missing, and can't find the status of, is what happened to the on-disk bitmap indexes which were supposed to land in the 8.4 release.

Would anyone from the PostgreSQL team be privy to that info? I can't really seem to find anything on Google.

Thanks.

FC10 VPN Setup to Windows PPTP using Network Manager

I was going through this for a couple of hours while trying to configure FC10 (moved from Gentoo [for now]) and NetworkManager to connect to my office's VPN server running on a Windows(R) ISA server.

I tried a variety of methods and configurations but kept getting errors of this sort:


Dec 26 15:02:00 localhost pppd[5483]: Plugin /usr/lib/pppd/2.4.4/nm-pptp-pppd-plugin.so loaded.
Dec 26 15:02:00 localhost pppd[5483]: pppd 2.4.4 started by root, uid 0
Dec 26 15:02:00 localhost pptp[5484]: nm-pptp-service-5480 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Dec 26 15:02:03 localhost pptp[5493]: nm-pptp-service-5480 warn[ctrlp_disp:pptp_ctrl.c:956]: Non-zero Async Control Character Maps are not supported!
Dec 26 15:02:09 localhost pppd[5483]: MS-CHAP authentication failed: E=691 Authentication failure
Dec 26 15:02:09 localhost pppd[5483]: CHAP authentication failed
Dec 26 15:02:10 localhost pppd[5483]: Connection terminated.


Authentication errors, but the issue was WHY?
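
(Incidentally, those lines come straight from syslog; on FC10 they land in /var/log/messages by default, so something like this lets you watch the negotiation live while NetworkManager dials:)

# watch the pppd/pptp chatter while the VPN connection is being attempted
tail -f /var/log/messages | grep -E 'pppd|pptp'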

Turns out (this may not apply to you, but it definitely applied to me):

My office is using a Windows ISA server (a 200x version, I would think).

So, in NetworkManager, there are three option boxes:

Username
Password
Domain

so, I happily added

Username : lotso
Password : lotso's password
Domain : lotso's Windows domain

and ended up with the errors above!

......
.....
2 hours later and MUCH googling
.....
....

I tried:
Username : lotso's Windows domain\lotso
Password : lotso's password
Domain : (left blank)

and it WORKED!!
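
If you want to sanity-check the same trick outside of NetworkManager, the pptp-linux package ships a pptpsetup helper; a rough sketch (the tunnel name and server below are made up, and note the same DOMAIN\user form in the username):

# create and start a test tunnel from the CLI; 'office' and the server are examples
pptpsetup --create office --server vpn.example.com \
          --username 'MYDOMAIN\lotso' --password 'secret' \
          --encrypt --start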


Idiosyncrasies!!

On the other hand, this is _not_ a bug with NM or PPTP; per digitalwound, his setup works fine with all three options filled in.

Python 2.4 to Python 2.5 upgrade hell

It's been a while since the last post.

Not much has been happening, except that I'm missing out on life due to work commitments, which seriously sucks.

In any case, I've been battling with the Python upgrade on my Gentoo box at home.

I've been having undefined-symbol issues with pygtk and pygobject all through last week; I solved it about 10 minutes ago, so I can finally go to sleep.

The main issue was the breakage when compiling a Python app which needed GTK support. The thing is, it also needed "threads" support which, for some reason, was not turned on by default for Python 2.5 but was for Python 2.4!

Hence, I spent the last week pulling my hair out trying all the different permutations of why it USED to work and doesn't now.

It ended up being a simple USE flag which was not set when the ebuild was built.
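
For anyone hitting the same wall, this is roughly the check-and-fix on a Gentoo box (equery comes from gentoolkit; the package.use line is the gist of what I ended up doing):

# see whether the installed Python was built with the "threads" USE flag
equery uses dev-lang/python | grep threads

# enable the flag and rebuild just Python
echo "dev-lang/python threads" >> /etc/portage/package.use
emerge --oneshot dev-lang/python

# quick sanity check: the thread module should now import cleanly
python -c 'import thread; print "threads ok"'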

Why.. Oh.. Why...

Automatic RAID Array Rebuilding

Hi guys, long time no post. The last post was in March and it's already June.

I've been busy as usual, but not dabbling as much as I "should", since I've been occupied with other non-FOSS-related stuff. (Psst: I'm now heavily into photography. Went to shoot some Japan GT queens!! Kawaaiii)

Anyway, since this is a (nearly) purely FOSS-based blog, I'm gonna talk about my automatic RAID rebuilding script.

You see, what happens is this: my PostgreSQL box (a Celeron with 2x500GB in RAID 1) has a tendency to have a drive drop out of the array once in a while for reasons unknown. (To date, I've been unable to find out why it dies so often.) I've tried a write-all/read-all pass using dd, but so far no errors have been thrown. So, it's been a manual routine of...

Go to work. See the email: "Your RAID has died!"
Log onto the box, do the rebuild.
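
The rebuild itself is only a couple of mdadm commands; roughly this (the device name here is just an example):

cat /proc/mdstat                   # confirm which half of the mirror dropped out
mdadm --detail /dev/md0            # see which device is marked faulty
mdadm /dev/md0 --remove /dev/sdb1  # kick the faulty half out of the array
mdadm /dev/md0 --add /dev/sdb1     # re-add it so md starts a resync
watch cat /proc/mdstat             # watch the rebuild progress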

After a while this just becomes tiring, so I decided to fsck it and make it automatic.

Here's the script:

#!/bin/bash

# Pull the device name of any drive mdadm reports as faulty.
# Note: the awk column ($6) depends on the exact layout of `mdadm --detail` output.
FAIL_DRV=`mdadm --detail /dev/md0 | grep faulty | awk '{print $6}'`

if [ -n "$FAIL_DRV" ]
then
  echo "Detected degraded array : $FAIL_DRV"
  echo "Starting automated array rebuild process"
  # Mark the drive as failed (it usually already is), remove it from the
  # array, then re-add it so md kicks off a resync.
  mdadm /dev/md0 --fail $FAIL_DRV --remove $FAIL_DRV --add $FAIL_DRV
else
  echo "Nothing to do"
fi


Simple eh..

So now I don't have to come in to work to find it all wonky, because it'll automatically rebuild itself.
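
For the record, the script is just run out of cron; something along these lines in /etc/crontab (the path is made up):

# check the array every 15 minutes and rebuild if needed
*/15 * * * * root /usr/local/sbin/raid-rebuild.sh >> /var/log/raid-rebuild.log 2>&1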

Some of you may ask: why don't I just replace the drive? Because I can't find any replacement drive with a PATA connection at 500GB capacity! The largest I can find is 160GB.

Bummer

GVFS makes me happy

Did you know that Nautilus is now integrated with the new GVFS (GNOME virtual filesystem), which replaces the older GnomeVFS module?

The new one is partially built on top of FUSE, or rather integrates with FUSE, and it makes mounting and accessing files on network shares a much better experience than it was previously.

With GVFS and Nautilus, when you browse to a share it's automatically mounted under ~/.gvfs:

gvfs-fuse-daemon on /home/gentoo/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=gentoo)

~/.gvfs $ ls -al
total 12
dr-x------   3 gentoo users     0 Apr  5 03:38 .
drwx------ 118 gentoo users 12288 Apr  5 04:37 ..
drwx------   1 gentoo users     0 Feb  1 21:20 mediacenter on 192.168.10.111


And thus you can actually use ordinary applications to access files under those mount points.

For example, one of the reasons I rely heavily on Totem and Nautilus is their SMB support, whereby I can connect to my home mediacenter server and stream videos from it without going through the motions of actually mounting the drives/shares.

(Did you know that only Totem and Nautilus share this feature in GNOME, and the rest of the programs are brain-dead in this regard?)

Kinks:
Currently, based on my limited testing (it's past 4am as I write this; I was playing with GVFS + Nautilus 2.22), there are bugs when you try to access an SMB share which is password protected. For some reason, the usual Nautilus "password verification" box does not come up.

And putting smb://user:password@server/share does not work either.

But if you were to drop down to the CLI and do a



$ gvfs-mount smb://gentoo@192.168.10.2/storage
Password required for share storage on 192.168.10.2
Domain [HOME.NET]:
Password:


then that would work and you’ll get the mountpoint in ~/.gvfs and you can access files from that location.
It’s an additional step, but hey, at least now it’s

1. transparent (to an extent) and most apps can see it (tried mplayer/gmplayer/xine)
2. FAST. It’s pretty much faster than the previous incarnation of using smb protocol through nautilus. (I don’t have any real stats)
3. prone to crashing.
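
To be clear about point 1: once the share is mounted this way, any ordinary program can read straight from the FUSE path; a quick sketch (the share is the one mounted above, the file name is made up):

ls "$HOME/.gvfs/storage on 192.168.10.2"
mplayer "$HOME/.gvfs/storage on 192.168.10.2/videos/holiday.avi"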

Steps to secure your site

So, in today's lesson I will elaborate on how one site has decided to add extra "protection" against phishing or, in more general terms, how to secure your site against malware and other badware.

1. Open an account with RHBBank (rhbbank.com.my)
2. Subscribe to internet banking
3. Go Overseas
4. Attempt to pay your credit card fees etc via internet
5. Pull your hair out in the attempt

So basically, I've been trying to access RHBBank's secure site (https://logon.rhbbank.com.my/) and keep getting either permission denied, server errors, or something along those lines.

So, on a hunch, I tunnelled to my home squid proxy server and used that as the proxy for Firefox. I fired up the browser and was greeted with the RHB secure page!!
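
Roughly what I did, for the curious (the hostname is an example; 3128 is squid's default port): forward a local port to the squid box at home over ssh, then point Firefox's proxy settings at localhost on that port.

# forward local port 3128 to the squid proxy running on the home box
ssh -N -L 3128:localhost:3128 lotso@home.example.com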

Open up Opera (normal settings, no proxy), load the same page, and... "internal server error".

So, one of two things is happening (a quick way to tell them apart is sketched below).

1. RHB is looking at IP addresses and denying access to anyone outside the Malaysian IP address range
2. My company's outgoing filter regards RHBBank as malware etc. and prohibits me from visiting it.
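
A quick check is to hit the same URL directly and then through the home proxy, and compare the responses (the proxy here is the ssh tunnel from earlier):

curl -I https://logon.rhbbank.com.my/                            # direct, via the office network
curl -I -x http://localhost:3128 https://logon.rhbbank.com.my/   # via the squid proxy at home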

funny business.

Location..Location..Location

I'm in San Jose. Still pondering whether I can make it to the local PUG (PostgreSQL user group) meeting on Feb 12 since I'm here.

Will get the chance to meet David Fetter and team.

I’ll see what happens.

PS : I freaking hate it here this time of year. It’s cold and so are my fingers! I need to constantly rub my hands together

You say Lemon, I say Lemonade (A story)

The past few weeks were not all that great: in addition to facing new challenges at my primary day job, I also had to deal with my pet project at work, which exists to help smooth out the day job's activities.

Some of you may know that my pet project involves pulling gobs of data into a PG instance to make my own version of a company datamart. I'm not talking about small gobs of data, but something in the range of 200+GB. (It was more, but in one of the efforts to control/tune the server, I deleted close to 2-3 months' worth of data.)

200+GB may not seem like much to you guys who get to play with real iron or "real" server hardware. All I had was a Celeron 1.7GHz with 768MB of RAM and a few IDE 7200 RPM drives. In short, all I had was lemons and I needed to make the best of them!

Actually, all was working fine and dandy up until I decided to make a slave server using Slony-I + pgpool, and while that was a good decision, the hardware involved was the same if not worse (512MB of RAM only). When I started to implement it, I was faced with two issues.

1. Replication would lag behind by up to a day or so, because the next sync (the dreaded "fetch 100 from log") was taking too long.
2. My nightly vacuum job went from an average of 4+ hours to something like 27+ hours.

So, in an effort to get things under control, I went down a few paths and hit more than my share of stumbling blocks. One of the things I tried was reducing the amount of "current" data in a particular table from 1 month -> 2 weeks -> 1 week (moving the rest into a so-called archive table, still in the same tablespace). This didn't really go well: I initially tried to move the data in 3-hour chunks, which failed, then in 1-hour chunks, and finally in 15-minute chunks.
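
For the curious, each "chunk" move was basically just an insert-then-delete of a time window, wrapped in a transaction; a rough sketch with made-up table and column names:

psql -d datamart <<'SQL'
BEGIN;
-- copy one 15-minute window into the archive table...
INSERT INTO measurements_archive
    SELECT * FROM measurements
    WHERE logged_at >= '2008-11-01 00:00' AND logged_at < '2008-11-01 00:15';
-- ...then drop those rows from the "current" table
DELETE FROM measurements
    WHERE logged_at >= '2008-11-01 00:00' AND logged_at < '2008-11-01 00:15';
COMMIT;
SQL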

But in the end it was all really futile, because what I was essentially doing was just generating more and more IO activity (and that's not a good thing). On top of that, I also had to deal with vacuuming the tables, due to PG's MVCC design, and that was also not fun.

So, in the end, I broke my 3x500GB RAID 1 mirror (one spare disk) and used the spare as the Slony-I log partition. That wasn't all I did initially: I also moved the two main problematic tables from the main RAID 1 tablespace into that single-disk tablespace (that was also a mistake), and it didn't help at all. IO activity was still high and I wasn't able to fix my vacuuming times either.
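
(For reference, the tablespace shuffle itself is dead simple; a sketch with made-up names and paths:)

psql -d datamart <<'SQL'
-- a tablespace living on the single spare disk
CREATE TABLESPACE spare_disk LOCATION '/mnt/spare/pgdata';
-- move a table (and its data) onto it
ALTER TABLE measurements SET TABLESPACE spare_disk;
SQL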

Time for another plan.

This time around, what I did was move the two big tables back into the RAID 1 tablespace and leave the Slony logs on the single disk. In addition, I made a few alterations to the way I pull data from the main MSSQL database and the way it is inserted into PG.

I'm now utilising partitioning and some additional pgagent rules to automatically switch to a new table every 7 days, and in doing so I also had to change a few other items to get things working smoothly. I did this last Friday and, based on the emailed logs, I think I've made a good decision: right now everything seems peachy, with the vacuum back to ~4 hours and no lag in the Slony replication.
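
The weekly switch boils down to creating next week's child table and repointing inserts at it; a rough sketch of the shape of it (the actual pgagent job isn't shown, and the table, column and dates are made up):

psql -d datamart <<'SQL'
-- next week's partition, constrained to its date range
CREATE TABLE measurements_2009w05 (
    CHECK (logged_at >= '2009-01-26' AND logged_at < '2009-02-02')
) INHERITS (measurements);

-- repoint inserts on the parent table at the new child
CREATE OR REPLACE RULE measurements_insert AS
    ON INSERT TO measurements
    DO INSTEAD INSERT INTO measurements_2009w05 VALUES (NEW.*);
SQL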

I still have another thing to do, which is to alter the script I use to pull from the main DB, as I'm being kicked (requested) over to pulling from an alternate DB which has a slightly different architecture.

A 2-disk RAID 1 is definitely MUCH better than a single-disk tablespace. With the amount of read/write activity that I have, a single disk is just not doable.

So, that’s how I made lemonade with my lemons. (hmm.. does this sound right?)