PowerShell 3 Community Technology Preview 2

PowerShell architect Jeffrey Snover (@jsnover) announced the availability of PowerShell v3 CTP 2 today in a Twitter post. I installed the 64-bit version. It required a reboot.

I have not yet had a chance to look into the goodies, but right off the bat I noticed two changes.

1) While my profile was loading, I got an error message, where PowerShell 3 complained about a line in my profile which goes something like this:

write-host "$mounted_drive: could not be found"

At C:\Users\Adil\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1:56 char:15
+         write-host "$mounted_drive: could not be found."
+                     ~~~~~~~~~~~~~~~
Invalid variable reference. ':' was not followed by a valid variable name character. Consider using ${} to delimit the name.
    + CategoryInfo          : ParserError: (:) [], ParseException
    + FullyQualifiedErrorId : InvalidVariableReferenceWithDrive

PowerShell v2 never complained about that line, but I like the fact that PowerShell 3's message is clear and suggests a way to fix it. Both of the following work:

write-host "$mounted_drive : could not be found"   ## add a space
write-host "${mounted_drive}: could not be found" ## avoid ambiguity
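As a rough analogy (Bash, not PowerShell), other shells use ${} for the same disambiguation when a variable name runs into trailing characters:

```shell
# Bash analogy only: without braces the shell would look for a variable
# named "drivename" rather than "drive" followed by the literal "name".
drive="Z"
echo "${drive}name"   # prints: Zname
```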

2) PowerShell now auto-completes function names in my $profile when I hit "tab". From the release notes, I see that some work went into "tabbing": if you tab in the middle of a line, it no longer deletes the rest of the line. This is a welcome usability change.


Using your own domain name with blogger

Using your own domain name instead of the generic address is a pretty easy change.

Log in to Blogger and select the blog (in case you have more than one)
Click "Settings" > "Basic"
Scroll down a bit until you see "Publishing" section
Click "Add a custom domain"

If you already have a domain, you can use that:

  • Click "Switch to advanced settings"
  • Enter your domain name under Advanced settings
  • Click "Save"

If not, you can buy one through Blogger as described here. The other option is to buy through Google Apps by following the wizard here. I chose to use Google Apps to buy the new domain. It takes about 10 minutes to get your new domain up and running. All DNS records are automatically pointed to Google's name servers.

After this was done, I went back to Blogger and wanted to use my new domain name. I got an error:
"Another blog or Google Site is already using this address".

The reason is pretty straightforward, but the solution is a bit tricky. I was getting this message because Google Sites is part of Google Apps, and when you type your "www" address, you are redirected to Google Sites. In other words, the DNS record for "www" is mapped to the Google Sites address.

The problem is that Blogger needs this address to be able to serve your blog when someone requests your site. Uninstalling Google Sites should have fixed this:

  • Log on to Google Apps
  • Click Settings on the top menu bar
  • Click Sites on the left 
  • Click "Uninstall Sites" (Do not do this yet, keep on reading)

This, however, did NOT fix the problem because, to my surprise, this process does not remove the 'www' record.

Following is the correct procedure:

  • Log on to Google Apps
  • Click "Settings" on the top menu bar
  • Click "Sites" on the left 
  • Click "Web Address Mapping"
  • Delete "www" mapping
  • Click "General" > "Uninstall Sites" (Optional)

If you have already uninstalled Google Sites, you can add it back:

  • "Dashboard" > "Service Settings" (near the bottom) > "Add more services"

That way you can get access to the Sites menu to complete the procedure above. Once done, simply head over to Blogger and add your domain. That's all there is to it.


Installing Android 4.0 on Nexus S

The latest version of Android, Ice Cream Sandwich (ICS), is not officially available for the Nexus S yet. However, some talented developers have managed to use the available SDK and some dumped data to port ICS to the Nexus S. For this post, I used Drew Garen's Beta v10 port (no longer available - 11/30/2011. See updates at the bottom of the post).

As I explained in my "On Rooting Android" post, I spent the last few days trying to understand a bit more about the inner workings of Android and the rooting landscape. Now that I have a good backup of everything and have tested that I can restore back to that point, it is time to try ICS.

ICS installation for the Nexus S is quite simple, but make sure you have read the previous article and have the prerequisites described there.
Now you are ready to install ICS:
  • Download the latest Beta from Drew's site
  • Rename the downloaded zip file (optional)
  • Connect your Android via USB in USB Mass Storage Mode and copy the zip file to the root.
  • Disconnect USB Storage Mode
  • Launch ROM Manager
  • Boot into Recovery Mode
  • Wipe Dalvik Cache in Advanced Mode
  • Wipe Data/Factory Reset
  • Wipe Cache
  • Select "Install from zip file" and point to the zip under /sdcard/
  • Watch it do its magic.
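The download-and-copy steps could be sketched in a shell like this; every file and path name here is an assumption (the real ROM file name comes from Drew's site, and the SD card root is whatever drive the phone mounts as):

```shell
# Sketch with stand-in names, simulated locally:
sd=$(mktemp -d)              # stand-in for the phone's SD card root
rom="ics-beta-v10.zip"       # hypothetical name of the downloaded ROM
touch "$rom"                 # stand-in for the actual download
cp "$rom" "$sd/update.zip"   # copy to the root, optionally renamed
ls "$sd"                     # prints: update.zip
rm -r "$sd" "$rom"
```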

If all goes well, you should have ICS up and running on your Nexus S in 5 mins!

This is the new boot screen. It takes about the same time to boot as before. If there is an improvement, I did not notice it. In fact, it's great that this 'beta' version is able to match stock 2.3.6 speeds.

Once the boot process finishes, you will see a welcome screen. Simply tap "Start" to kick off the Google initialization process.

Then you can allow or disallow Google location services. Tap "Next" to continue.
At this point, you can either sign in with your Google account or create a new one.

Once the wizard is done, you will notice your apps start coming down from the Android Market. As far as I can tell, most of the apps came down just fine. I was, however, missing the apps I had installed from the Amazon Market, which of course makes sense.

The only surprise I had was with the Google Authenticator app. It did not come down, and I had to download it and log into it by generating an Application-Specific Password on the Two-factor Authentication page.

If you are using two-factor authentication, you might want to make sure your recovery methods for two-factor authentication are still valid before diving into this process.

I've been using the ICS beta port for a day now, and apart from known issues like GPS, the only problem I noticed was the battery-killing Google+ app. See the screenshot.

It basically drained the battery to half in only 5 hours. I have not yet looked into what exactly in Google+ is causing this, and I have not seen any other reports from testers in the related xda-developers forum.

Update: 11/30/2011

After Google released the ICS source code, devs at the XDA forums started working on it and porting it to different phones. Koushik Dutta, who also maintains other popular projects like ClockworkMod Recovery and ROM Manager, was one of the first to come up with a build for the Nexus S.

There are several ICS ports right now, and many others have tweaked the work of these ROM developers or come up with mixes of them. There are also several kernel releases at this point, all available from the Nexus S XDA forums here (usually under Development).

Drew Garen has also used Koushik's and others' work, adding his own stuff on top; his work is available from his new (Blogger) site.

By the way, a few times I got "Random Offset {some number}" when flashing ROMs, but apparently this is NOT an error; it is related to a new security feature, "Address Space Layout Randomization", as mentioned here.


On Rooting Android

When I got my iPod Touch, I immediately started looking for methods to root it, but I never felt the need on my Android Nexus S, as I was already able to do pretty much anything, including free-of-charge tethering (thank you, T-Mobile).

The newest Android version (v4, Ice Cream Sandwich, a.k.a. ICS) was announced a few weeks back, and the smart folks at the xda forums have already managed to port the SDK version to my phone. Of course, the phone needs to be rooted to flash the new ROM.

Although you probably have a lot of your data, like your contacts, backed up to the Google cloud, there is no way to keep 'all' your data backed up at this point, and rooting wipes your device.

A couple of funny things happened when I looked at rooting instructions. All of them tell you to make a full backup of your system, but you will be lucky to find any instructions on what exactly to back up and how. Most of the tools mentioned want you to be rooted to begin with. It may be possible to use Astro File Manager or 'adb pull' commands from the Android SDK, but the things you can do are limited 'before' you root. In fact, this is one of the reasons people root their phones: they would like to keep their 'data' when they buy a new phone (e.g. high scores in a game, or playlists in a music app).

At this point, applications may write their data anywhere, as there is no 'designated' location to keep app data, and therefore there is no easy way to back that data up, even if it were possible for a user to access the /data folder, which is 'usually' where apps write. There is a feature request on this, but as of now, no solution.

Anyway, the other funny bit is about an SDK tool named 'fastboot'. There is a ton of material on the web telling you how to use it; the problem is that the latest SDKs do not include this tool. If you head over to the Android SDK download page, you will notice that there is only a link to the current release (i.e. revision 15). The last revision that had 'fastboot' was r13, and there is no link to it.

If you are a developer, you probably know how to get older versions of SDK using SDK manager but mere mortals do not need to despair either! Here is what you can do:

Hover over the r15 link and note the URL it points to. To download r13, simply replace 'r15' with 'r13' in that link, and you should be able to download the r13 version. Once you download it, you can extract fastboot.exe from the 'tools' folder. In the current revision, Google has moved adb.exe from the 'tools' folder to the 'platform-tools' folder; you might want to put fastboot.exe there too.
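The revision swap is a simple text substitution; the file name below is only a stand-in for the real download link:

```shell
# Hypothetical link pattern; substitute the revision number to get r13.
link="android-sdk_r15-windows.zip"
echo "$link" | sed 's/r15/r13/'   # prints: android-sdk_r13-windows.zip
```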

One more thing: you need to install USB drivers on your machine. When you download the SDK, you can get the Google USB drivers for your device. The catch is that they won't work when you are in 'fastboot' mode (at least for the Nexus S) if your Windows is 64-bit. You will then need to install the PDANet drivers so that your Nexus S is detected. You can find those links here. Good luck!

Btw, what do you get out of all this hassle? Here is a pretty good video from NexusHacks.

Update: 2011-11-06 - Root-ed

I finally found an easy-to-use hack to root my Nexus S without destroying/wiping any data (i.e. without unlocking the bootloader). I would like to emphasize this again, because I have read tons of so-called 'guides' that seem to use 'rooting' and 'bootloader unlocking' interchangeably.

If you are like me, you may want to understand why you need to do the things mentioned in the 'guides' instead of blindly following them. There is a lot of mumbo-jumbo to confuse the hell out of a regular user like myself. So I had to look deeper into the whole Android boot process and architecture to make sense of it. I hope this helps others as well.

Root - Super User
As in any other Unix/Linux variant, your goal is to become the most powerful user, with no restrictions, on your Android. I.e. you want to become 'root', or 'super user'. That way you can install any application, or even a totally new Android system (e.g. CyanogenMod). It's your device; you do whatever you like with it.

Well, you wish! Android will not just allow a regular user to become root. From a security perspective, you really would not want that anyway. Imagine any piece of software being able to mess with your device; yep, that would be malware.

But you own the device and "you" want to become root! Well, you have two options:

1) You will need to find an exploit, as malware does: a hack that elevates your privileges to root. This is exactly what the 'zergRush exploit' mentioned below does.

2) If there is no known hack, the other option is to go through the steps in those rooting guides. Most of them will tell you that you need to "unlock your bootloader".

Why unlock your boot loader?
Well, you are trying to become root in the Android operating system, but it does not allow you, and the bootloader is the software that runs "before" the (Android) operating system. In other words, it's the initialization code that loads the (Android) OS, and if you can mess with it, you can hack into that Android OS, or perhaps replace it altogether with a modified version.

The bootloader has two stages. The first stage (also referred to as 'IPL', or "Initial Program Load") provides support for loading recovery images into the flash memory of the device.

If the bootloader detects a certain keypress (on the Nexus S, Power button + Volume Up), it goes into a special mode called 'fastboot mode', where you can use the 'recovery' option to flash a new (or old) image. From this point on you are in the second phase of the boot process. You may see acronyms like 'SPL', which means 'Secondary Program Loader' and refers to this second phase.

This is also why instructions for manually installing OTA (Over The Air) update files usually tell you to drop the file in the root of the sdcard, then turn your device on while pressing the 'special keys' for your phone and choose the 'recovery' option. Upgrading your firmware is basically flashing a new (firmware) image.

However, we have a problem there. Usually the bootloader is "locked", so that it will load only recovery images signed by a certain authority. This might be Google or your wireless carrier.

So, if we can unlock the bootloader, we can use a 'custom recovery image' like ClockworkMod Recovery, which allows us to install 'custom firmware' like CyanogenMod (a.k.a. a CyanogenMod ROM). As these ROMs may include not only the Android OS but the IPL/SPL as well, there is a risk of making your phone unusable (commonly referred to as 'bricking' the phone) if there is a bug in the IPL/SPL code.

Phones like my Nexus S are pure Android devices. The wireless carrier does not install any customized software on them and does not cripple any of their abilities, and Google allows us to 'unlock' the bootloader by running a simple command:

"fastboot oem unlock"

I explained above how to get fastboot.exe. So you get that and the other prerequisites, then issue the command to unlock your bootloader, which apparently voids your warranty and "WIPES YOUR DEVICE", including your sdcard.

In my case, I did not want that to happen without taking a full backup of the system, which was not really possible because I did not have root access. A bit of a chicken-and-egg problem...

Solution: Exploit to become Root
This method depends on DooMLoRD's Easy Rooting Toolkit v1.0, which uses what's called the "zergRush exploit".

The whole process took me less than a minute:


              Easy rooting toolkit (v1.0)

                   created by DooMLoRD

        using exploit zergRush (Revolutionary Team)

   Credits go to all those involved in making this possible!


 [*] This script will:

     (1) root ur device using zergRush exploit
     (2) install Busybox (1.18.4)
     (3) install SU files (3.0.5)

 [*] Before u begin:

     (1) make sure u have installed adb drivers for ur device
     (2) enable "USB DEBUGGING"
           from (Menu\Settings\Applications\Development)
     (3) enable "UNKNOWN SOURCES"
           from (Menu\Settings\Applications)
     (4) [OPTIONAL] increase screen timeout to 10 minutes
     (5) connect USB cable to PHONE and then connect to PC
     (6) skip "PC Companion Software" prompt on device



Press any key to continue . . .
--- STARTING ----
adb server is out of date.  killing...
* daemon started successfully *
--- cleaning
--- pushing zergRush"
3215 KB/s (23052 bytes in 0.007s)
--- correcting permissions
--- executing zergRush

[**] Zerg rush - Android 2.2/2.3 local root
[**] (C) 2011 Revolutionary. All rights reserved.

[**] Parts of code from Gingerbreak, (C) 2010-2011 The Android Exploid Crew.

[+] Found a GingerBread ! 0x00015118
[*] Scooting ...
[*] Sending 149 zerglings ...
[+] Zerglings found a way to enter ! 0x10
[+] Overseer found a path ! 0x000151e0
[*] Sending 149 zerglings ...
[+] Zerglings caused crash (good news): 0x40119cd4 0x0054
[*] Researching Metabolic Boost ...
[+] Speedlings on the go ! 0xafd255dd 0xafd3908f
[*] Popping 24 more zerglings
[*] Sending 173 zerglings ...

[+] Rush did it ! It's a GG, man !
[+] Killing ADB and restarting as root... enjoy!
if it gets stuck over here for a long time then try:
   disconnect usb cable and reconnect it
   toggle "USB DEBUGGING" (first disable it then enable it)
--- pushing busybox
4149 KB/s (1075144 bytes in 0.253s)
--- correcting permissions
--- remounting /system
--- copying busybox to /system/xbin/
2099+1 records in
2099+1 records out
1075144 bytes transferred in 0.097 secs (11083958 bytes/sec)
--- correcting ownership
--- correcting permissions
--- installing busybox
--- pushing SU binary
1276 KB/s (22228 bytes in 0.017s)
--- correcting ownership
--- correcting permissions
--- correcting symlinks
--- pushing Superuser app
4739 KB/s (762010 bytes in 0.157s)
--- cleaning
--- rebooting
Press any key to continue . . .

At the end of this, you get SuperUser v3.0.5(39) installed on your Nexus S. This exploit seems to work with many other Android phones; there is a growing list in the forum linked above. It's also easy to go back if you want to unroot.

I launched SuperUser, clicked "Preferences", and tapped "Su binary v3.0" to update it to the latest version (3.0.3 as of now). I also set "Automatic Response" to "Allow". To test:

PS Z:\adil\scripts\powershell> adb shell
$ su
# whoami
whoami: unknown uid 0

This means I have root access on my Nexus S, and my bootloader is still locked! What's next?
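The odd "unknown uid 0" message is just the stripped-down whoami not knowing a name for uid 0; scripts typically test the numeric id instead. A minimal sketch (the uid value is hard-coded here as a stand-in for running `id -u` on the device):

```shell
uid=0   # stand-in for $(id -u); 0 means root
if [ "$uid" -eq 0 ]; then
  echo "running as root"
else
  echo "not root"
fi
```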

* Install backup software: Now that I have 'root' access, I can install all the 'backup software' mentioned on those rooting sites. I installed 'Titanium Backup', took a full backup of the system to my sdcard, and then mounted the phone via USB to back up everything on my sdcard to my hard drive.

* Install ROM Manager: This is to be able to flash custom ROMs (i.e. install customized Android versions). I installed 'ROM Manager' but have not done anything else yet.

One last thing tonight... Once I became root, I was able to get more information about my system and manually create backup images, as shown below:

$ adb shell
$ su
# cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00200000 00040000 "bootloader"
mtd1: 00140000 00040000 "misc"
mtd2: 00800000 00040000 "boot"
mtd3: 00800000 00040000 "recovery"
mtd4: 1d580000 00040000 "cache"
mtd5: 00d80000 00040000 "radio"
mtd6: 006c0000 00040000 "efs"

# cat /dev/mtd/mtd2 > /sdcard/mtd2.img ## Boot image
# cat /dev/mtd/mtd3 > /sdcard/mtd3.img ## Recovery image
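The second column in /proc/mtd is the partition size as a hex byte count; for instance, the 'boot' line above works out to 8 MiB:

```shell
# Convert the hex size field of the "boot" line (quoted from the listing
# above) into MiB.
line='mtd2: 00800000 00040000 "boot"'
size_hex=$(echo "$line" | awk '{print $2}')
echo $(( 0x$size_hex / 1024 / 1024 ))   # prints: 8
```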

Then I connected the phone to my PC using 'USB mass storage' mode and backed up these two images to my hard drive:

robocopy l:\ z:\adil\Backup\Android\sdcard\ /mir

where l: refers to the sdcard drive,
z:\adil\Backup\Android\sdcard\ is where on my hard drive I backed it up to,
and /mir makes a mirror copy of everything on the sdcard (be careful with this option: if you use it incorrectly by specifying the wrong target, you may wipe out the target).

I am, however, not sure if these will be enough to get things back. See the update below for proper Backup/Restore procedures.

Update 2011-11-07 More on Back up and Rom Management

Revenge of Stock ROM
Stock ROM is the original Android Image. Below, you will find how it tries to keep its integrity.

Today I wanted to use the "ROM Manager" app to take a backup. To my surprise,
"ROM Manager" > "Backup Current ROM"
got me a black screen with a yellow exclamation mark and an Android icon underneath. Apparently, "ClockworkMod Recovery" (CWM) was overridden by the Stock ROM after the reboot. The boot process had detected that the recovery was tampered with and had restored the previous version.

Reinstalling ClockworkMod Recovery
This is pretty straightforward, as I still have root access on my Nexus S.
  • Launch "ROM Manager"
  • Tap "Flash ClockworkMod Recovery"
  • Select "Google Nexus S"
Making ClockworkMod Recovery Stick
Once done, you get a message that says "Successfully flashed ClockworkMod Recovery". This fix is temporary, though: the Stock ROM restores its own recovery on reboot. One suggested solution is to rename the file that causes this, as follows:

$ adb shell   ## Use adb Android SDK tool to open a shell (see above)
$ su          ## Become root
# mv /system/etc/ /system/etc/  ## rename the file

failed on '/system/etc/' - Read-only file system

Unfortunately, you get an error back. The reason is that the /system partition is mounted read-only (ro); before you can make any changes to files under it, you will need to remount it read-write (rw).

First, we have to find out how /system is mounted:

# mount |grep system

/dev/block/platform/s3c-sdhci.0/by-name/system /system ext4 ro,relatime,barrier=1,data=ordered 0 0

What does this mean?
  • The first field is the block device that backs the /system partition
  • The second field is the mount point (/system)
  • The third field is the filesystem. This used to be yaffs2, but now we see it is 'ext4'
  • The fourth field holds the mount options; what matters for us is 'ro', telling us the mount is read-only
For further reading on Android partitions, take a look at this post.
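Those fields are easy to pull apart with standard tools; for example, a script could check the options field for the 'ro' flag:

```shell
# The line below is the mount output quoted above.
line='/dev/block/platform/s3c-sdhci.0/by-name/system /system ext4 ro,relatime,barrier=1,data=ordered 0 0'
opts=$(echo "$line" | awk '{print $4}')   # comma-separated mount options
case ",$opts," in
  *,ro,*) echo "/system is read-only" ;;
  *)      echo "/system is read-write" ;;
esac
```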

With this knowledge, we can use the mount command to remount /system at the same location, but this time with the 'rw' option, so we can modify its content:

# mount -o remount,rw -t ext4 /dev/block/platform/s3c-sdhci.0/by-name/system /system

Now we can go ahead and make the change.

# mv /system/etc/ /system/etc/
# ls -l /system/etc

-rw-r--r-- root     root        58357 2011-09-30 09:06 NOTICE.html.gz
-rw-r--r-- root     root       236823 2011-09-30 09:06 apns-conf.xml
drwxr-xr-x root     root              2010-11-24 16:42 bluetooth
-rw-r--r-- root     root          682 2010-11-24 16:42 contributors.css
-r--r----- bluetooth bluetooth      935 2010-11-24 16:42 dbus.conf
drwxr-xr-x root     root              2010-11-24 16:42 dhcpcd
-rw-r--r-- root     root        11865 2011-04-29 12:18 event-log-tags
-rw-r--r-- root     root          238 2010-11-24 16:42 gps.conf
-rw-r--r-- root     root           25 2010-11-24 16:42 hosts
-r-xr-x--- root     shell        1200 2010-11-24 16:42
-rw-r--r-- root     root         7696 2010-11-24 16:42 media_profiles.xml
drwxr-xr-x root     root              2011-09-30 09:06 permissions
drwxr-xr-x root     root              2010-11-24 16:42 ppp
-rw-r--r-- root     root          104 2010-11-24 16:42 secomxregistry
drwxr-xr-x root     root              2011-09-30 09:06 security
drwxr-xr-x root     root              2011-04-29 12:18 updatecmds
-rw-r--r-- root     root          531 2010-11-24 16:42 vold.fstab
drwxr-xr-x root     root              2010-11-24 16:42 wifi
-r-xr--r-- root     root          415 2008-08-01 08:00

Then we go back to ROM Manager and flash ClockworkMod Recovery one last time, and it should now stick around between reboots.

Using ClockworkMod Recovery for Backup

There is a long guide here explaining the various options with screenshots, but it's pretty basic.

1) Manual Backup

Select "ROM Manager" > "Reboot into Recovery" (for manual management). Phone will boot into ClockworkMod Recovery console.

Use "Volume down/up" buttons to move up or down and "Power" button to select an option.

As we want to take a full backup, we choose the option that says "backup and restore".

We then choose backup option and let the tool work its magic.

On my phone, the process took about 10 minutes. There is a progress bar that gives some visual feedback, and when all is done you get a "Backup complete!" message at the bottom.

2) Backup via ROM Manager

This is quite straightforward as it is an option in the "ROM Manager" application.

Select "ROM Manager" > "Backup Current ROM"

Enter a backup name, or tap "OK" to accept the suggested name.

Phone boots into recovery mode and starts the back up process.

After the backup is finished, phone boots back up.

Backed-up files reside under the /sdcard/clockworkmod/backup/{backup_name} folder. Below is the list of files after a backup:

 183.4 m        .android_secure.vfat.tar
   8.0 m        boot.img
   12672        cache.yaffs2.img
 478.3 m        data.ext4.tar
     298        nandroid.md5
   8.0 m        recovery.img
 174.8 m        system.ext4.tar
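The nandroid.md5 file in that folder holds checksums for the image files, so a backup can be verified with md5sum before you rely on it. A minimal local simulation (no phone involved; the file contents are made up):

```shell
dir=$(mktemp -d); cd "$dir"
echo "fake image data" > boot.img   # stand-in for a real backup image
md5sum boot.img > nandroid.md5      # record the checksum
md5sum -c nandroid.md5              # prints: boot.img: OK
```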

Testing Restore from Backup

At the end of the day, all this effort is to be able to restore from a backup to get Android back to the original (Stock) state.

So, I booted into ClockworkMod Recovery mode and
wiped cache, 
wiped dalvik cache, 
wiped data/factory-reset...

After the reboot, the Nexus S came up and kicked off Google's welcome wizard. I skipped it, and there it was: my Nexus S, as if I had just bought it. The only difference was that the SuperUser app was still there.

To restore everything back:

  • I installed ROM Manager from Market
  • Flashed ClockworkMod Recovery 
  • Tapped "Manage and Restore Backups"
  • Selected the latest backup (the one above)
  • Phone went into Recovery Mode and recovery started
  • After about 5 minutes, phone rebooted again
  • Android came up and everything was restored successfully as if I never wiped my phone! 
It was perfect. Well, too perfect, in fact, because apparently I had not renamed "/system/etc/" before taking the latest backup. So after the restore, ClockworkMod Recovery was gone, but it only takes a minute to get it back.

Update: - 12/02/2011

Here is a very detailed, thoughtful article from security researcher Dan Rosenberg on "Rooting and Plagiarism". It helps put things into context.


Android Market App Update

The latest Android Market app, v3.3.11, brings some nice features. The most important, and contentious, one is the 'auto-update' feature. It used to be that you could check the 'Auto Update' box for each application; with this new feature, it is possible to set all apps to auto-update.

Some people like it; others do not. Personally, I like to look at the feature set before updating apps, so I do not set any app to auto-update. That being said, if you are in the camp that likes to keep current no matter what, choosing Wi-Fi-only auto-update seems like a no-brainer.

Right now, the Market app is not showing up as an update, but as noted by Android Police, it's possible to download the .apk (Android Application Package) and install it manually, if you have the right tools.

Even if you are not a developer, you might want to install the Android SDK and take a look at the command-line tools like "adb" (Android Debug Bridge), which you can also use to generate bug reports (after installing the SDK, simply go to the "platform-tools" folder under the Android SDK folder and type 'adb' to get the full command-line options).

There are several places where you can download the new Market app .apk, which you can find by googling. After you install the SDK and download the new Market app package, you can use the 'adb' tool to install it as shown below:

C:\Program Files (x86)\Android\android-sdk\platform-tools>adb install -r c:\users\adil\downloads\Market-3.3.11.apk
5428 KB/s (3296645 bytes in 0.593s)
        pkg: /data/local/tmp/Market-3.3.11.apk

Of course, you will have to replace c:\users\adil\downloads\Market-3.3.11.apk with the path to the .apk you downloaded.

It's important to use "-r", as you will already have the Market app installed and you want to reinstall over it. Help for the install command is as follows:

adb install [-l] [-r] [-s] <file> - push this package file to the device and install it
                               ('-l' means forward-lock the app)
                               ('-r' means reinstall the app, keeping its data)
                               ('-s' means install on SD card instead of internal storage)

This update also makes switching Google accounts easier, if you have several (as I do). It's built right into the tool. Enjoy!


Microsoft PowerShell Forums Wiki

Over the last two years, I have collected a list of (RSS) links to PowerShell-related blogs in my Google Reader. These blogs, and a subscription to daily PowerShell tips, are helpful. I also check the PowerShell newsgroup from time to time, which used to be a very active group with hundreds of messages each month.

When I wanted to check what's been happening in the Usenet group today, I noticed that there had been virtually no activity in the last few months. Looking at the messages from the last busy month, I found that Microsoft had posted a notice telling people they were stopping NNTP support, as they were seeing less usage of newsgroups and more activity in the (Microsoft) forums. Here is part of the explanation:

What is Happening?
This message is to inform you that Microsoft will soon begin discontinuing
newsgroups and transitioning users to Microsoft forums. 

As you may know, newsgroups have existed for many years now; however, the
traffic in the Microsoft newsgroups has been steadily decreasing for the
past several years while customers and participants are increasingly finding
solutions in the forums on Microsoft properties and third party sites.  This
move will unify the customer experience, centralize content, make it easier
for active contributors to retain their influence, mitigate redundancies and
make the content easier to find by customers and search engines through
improved indexing.  Additionally, forums offer a better user and spam
management platform that will improve customer satisfaction by encouraging a
healthy discussion in a clean community space.  To this end, Microsoft will
begin to progressively shift available resources to the forums technology
and discontinue support for newsgroups. 

Most people today are not really using NNTP clients but access such Usenet groups via Google Groups. MS forums have 'social' features, and I am guessing that's one of the reasons Microsoft actually wants people to use them. Whatever the reason, the forums are the way forward if you would like to post (PowerShell) questions, although there is some effort to continue NNTP via CommunityBridge.

Lastly, while checking the forums, I came across a great resource: the [Ultimate] PowerShell Wiki, called the PowerShell Survival Guide (the name reminded me of the Addon Survival Guides WoWInterface used to publish after each Warcraft upgrade). It's pretty comprehensive, with tons of links to other sites, learning materials, and resources. Check them out!


Dynamic views from Blogger

Google's Blogger team today announced a pretty cool feature they call "Dynamic Views". The announcement link is here. I am trying the 'magazine' view right now. So far so good: none of the special formatting and template customizations I had done are broken. Way to go, Blogger team!


What's coming in Server 8

In his latest newsletter, Mark Minasi has a wonderful summary of the features coming in Windows Server 8, from the BUILD event that took place this week.

Windows 8 Dynamic Access Control (DAC) seems quite interesting and is a clear indication that Microsoft is trying to respond to the everyday problem of 'permissions and auditing' in large enterprises. To be honest, I am not sure tagging is the answer, mainly because it's an attempt to use some of the unused attributes in AD and file tags in NTFS, which may prove limited once enterprises start being creative in employing the technology. However, the simple fact that it will be possible to use regular expressions on file ACLs is welcome news.

We will have to wait and see the implementation details. In the meantime, here is an article published today at Windows IT Pro by Sean Deuby that explains DAC in more detail.

PowerShell is, of course, getting a bigger slice of the pie in server management (e.g. Active Directory Administrative Center, a.k.a. ADAC) with version 3. The number of cmdlets is going from ~300 to 2300!

Speaking of AD, there does not seem to be much news other than making it virtualization-friendly. If you are still waiting for a SQL/database-driven directory, don't! It's not coming yet!


Writing Binary Data to Registry

Uh, oh! I found yet another post in drafts from 2007. I do not recall the events, but I am posting it for the common good :)


Yesterday, a friend from work showed me an interesting script he was working on. His script was reading a REG_BINARY type registry value, modifying it and 'attempting' to write it back to the registry.

There was an issue with 'writing back to the registry'. He was using the SetBinaryValue method to write the array with modified values back to the registry, but VBScript kept complaining there was a "type mismatch" on this line:

Return = oReg.SetBinaryValue(HKEY_LOCAL_MACHINE, strKeyPath & "\" & subKey, strValueName, arrValues)

If he set arrValues to a static array, with something like

arrValues = Array(1,2,3)

the script worked without any issues.

I took the code and tried to figure out what was wrong with it. I would like to write down a couple of key points for those people who are trying to do something similar.

* When we are talking about binary data in the registry, we are actually referring to hexadecimal values, because that's Registry-speak (1984 anyone?). We can use the GetBinaryValue method of WMI's StdRegProv class. The output is "an array of binary bytes".

* However, binary bytes (hex values) are not meaningful to us, so if we are reading the value to, let's say, modify it, we will probably want to convert it to a string using the CHR function, which returns the character associated with the specified ANSI character code, i.e. a decimal value between 0 and 127 (see an ASCII table).

Also, although the registry speaks in hex as far as binary data is concerned, the SetBinaryValue method does not understand hex.

Consider the following Reg Key/Value (pasting from exported .reg):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Adobe\Adobe Acrobat\7.0\FeatureLockDown\cDefaultLaunchURLPerms]

"sSchemePerms2"=hex:76,65,72,73,69,6f,6e,3a,31,7c,73,68,65,6c,6c,3a,33,7c,68,63,70,3a,33,7c,6d,73,2d,68,65,6c,70,3a,33,7c,6d,73,2d,69,74,73,3a,33,7c,6d,73,2d,69,74,73,73,3a,33,7c,69,74,73,3a,33,7c,6d,6b,3a,33,7c,6d,68,74,6d,6c,3a,33,7c,68,65,6c,70,3a,33,7c,64,69,73,6b,3a,33,7c,61,66,70,3a,33,7c,64,69,73,6b,73,3a,33,7c,74,65,6c,6e,65,74,3a,33,7c,73,73,68,3a,33,7c,6a,61,76,61,73,63,72,69,70,74,3a,31,7c,76,62,73,63,72,69,70,74,3a,31,7c,61,63,72,6f,62,61,74,3a,32,7c,6d,61,69,6c,74,6f,3a,32,7c,66,69,6c,65,3a,32,00

If we convert it to a string, we get

version:1|shell:3|hcp:3|ms-help:3|ms-its:3|ms-itss:3|its:3|mk:3|mhtml:3|help:3|disk:3|afp:3|disks:3|telnet:3|ssh:3|javascript:1|vbscript:1|acrobat:2|mailto:2|file:2

Then, you can change, say, "mailto:2" to "mailto:3" with VBScript's REPLACE function:

sNewValue = replace(sOldValue,"mailto:2","mailto:3", 1, -1 , 1)
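For illustration, the same decode → modify → re-encode round trip can be sketched in Python (a hypothetical sketch of the idea only; the actual script was VBScript using CHR and REPLACE):

```python
# Round trip: registry binary bytes -> string -> modified bytes.
# chr() plays the role of VBScript's CHR function; ord() is its inverse.
def bytes_to_string(values):
    return ''.join(chr(v) for v in values)

def string_to_bytes(text):
    return [ord(c) for c in text]

# tail of the sSchemePerms2 value shown above (the 00 is a terminating null)
old_bytes = string_to_bytes("acrobat:2|mailto:2|file:2\x00")
text = bytes_to_string(old_bytes)
new_bytes = string_to_bytes(text.replace("mailto:2", "mailto:3"))
```

The resulting new_bytes list is what would then be handed to SetBinaryValue.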

* The SetBinaryValue method is used to write "an array of binary data values" to the registry. What is misleading here, and this was the key to solving our issue, is that the method actually needs a Variant array, or you will get a type mismatch.

So this code works:

'Assumes objRegistry is a valid StdRegProv object.
On Error Resume Next

Const HKEY_LOCAL_MACHINE = &H80000002

Dim lRC, sPath, uBinary
sPath = "SOFTWARE\MyKey"
uBinary = Array(1,2,3,4,5,6,7,8)

lRC = objRegistry.SetBinaryValue(HKEY_LOCAL_MACHINE, sPath, "MyBinaryNamedValue", uBinary)

If (lRC = 0) And (Err.Number = 0) Then
    'Do something
Else
    'An error occurred
End If
* Pay attention to the Array function, which returns a Variant containing an array, as mentioned in the MS documentation:

"A variable that is not declared as an array can still contain an array. Although a Variant variable containing an array is conceptually different from an array variable containing Variant elements, the array elements are accessed in the same way."

Ubuntu Update Manager fails to download packages

I am having an issue with my Ubuntu installation (11.04) where, once the laptop is suspended, it never wakes up. I can repro this by simply putting it to sleep with Fn+F4. The only solution I could find is pressing the power button for 5 secs to completely power it off.

I checked the bug reports; although I see several people have reported it, they see it on different hardware. So, I will file a bug report, but first I wanted to make sure I have all the updates.

When I brought up "Update Manager", it showed me a couple of updates, but when I clicked to install them I got an error:

"failed to download packages, check your internet connection".

The message may be a bit misleading, as it suggests connectivity is the issue, but in fact the solution was simply clicking the "Check" button to refresh the list of available updates.


Fraudulent Certificates...Again

If you have not heard by now about the latest saga of "rogue certificates" caused by a Dutch company called DigiNotar, well, you are not paying attention to security news :)

Certificate Authorities are the backbone of the trust system we use for "secure" online access. Seeing that "lock" icon in the browser when we visit a site with an SSL certificate, and even a green bar if the site has an Extended Validation SSL certificate (EV SSL), may give us a sense of security, which unfortunately is proving to be a "false sense of security" these days.

There are plenty of articles out there on what happened (there is even a wiki about it), how it happened, who was involved, what Microsoft, Google, Mozilla, etc. are doing to contain the damage, and what you should be aware of. Here is one from Windows Secrets that explains it in layman's terms. I personally liked the detailed account from the Firefox folks.

If you are reading this blog, you are probably interested in an easier way to find out whether you have the cert or not, and PowerShell can come to the rescue:

PS C:\Users\Adil> gci certificate::LocalMachine\Root |?{$_.subject -match "DigiNotar"}

I do not have it on my machine, so I won't go further, but if you search for just "Digi", you will see some results:

PS C:\Users\Adil> gci certificate::LocalMachine\Root |?{$_.subject -match "Digi"}

    Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\Root

Thumbprint                                Subject
----------                                -------
5557C0953FBD9F93745B214FB2483E9369B597F0  CN=DT Soft Ltd, OU=Digital ID Class 3 - Microsoft S
5FB7EE0633E259DBAD0C4C9AE6D38F1A61C7DC25  CN=DigiCert High Assurance EV Root CA, OU=www.digic
3E2BF7F2031B96F38CE6C4D8A85D3E2D58476A0F  CN=StartCom Certification Authority, OU=Secure Digi
0563B8630D62D75ABBC8AB1E4BDFB5A899B24D43  CN=DigiCert Assured ID Root CA,

Unfortunately, Safari / OS X does not have a mechanism to check certificate revocation lists (CRLs) by default, but Apple should be releasing an update soon to fix the chain. In the meantime you can open up the 'Keychain Access' tool and remove the DigiNotar Root Certificate from the GUI, but where is the fun in that?

If you double click the certificate, you get detailed information as shown below.

And what if you had to do this on multiple Macs in an enterprise environment? You would want to use the command line. The tool for all certificate-related work is named 'security'.

You can dump a pretty list of all root CAs in OS X using the 'dump-keychain' parameter of the 'security' command, which, as mentioned above, is used to manipulate keychains from the command line.

If we only wanted to display the friendly names of the certificates, which would be the equivalent of what we see in the 'Keychain Access' GUI, we can filter by 'labl':

adil$ security dump-keychain "/System/Library/Keychains/SystemRootCertificates.keychain" |grep labl

    "labl"<blob>="Prefectural Association For JPKI"
    "labl"<blob>=" Certification Authority (2048)"
    "labl"<blob>="AOL Time Warner Root Certification Authority 1"
    "labl"<blob>="AOL Time Warner Root Certification Authority 2"

We can filter for the results that start with 'D' and, while at it, beautify the output by getting rid of '=' and everything before it:

adil$ security dump-keychain "/System/Library/Keychains/SystemRootCertificates.keychain" |grep labl |awk -F '=' '{print $2}' |grep ^\"D

"DST Root CA X4"
"Deutsche Telekom Root CA 2"
"DigiCert Assured ID Root CA"
"DigiCert Global Root CA"
"DigiCert High Assurance EV Root CA"
"DigiNotar Root CA"
"DoD CLASS 3 Root CA"
"DoD Root CA 2"
"DST Root CA X3"

We can also use the find-certificate parameter to find the certificate and print all its info (-a is for all keychains; not actually necessary here, as we know this is a root certificate, but good to be safe. If you happen to know the e-mail address, you could also use the -e parameter):

adil$ security find-certificate -a -c "DigiNotar" /System/Library/Keychains/SystemRootCertificates.keychain
keychain: "/System/Library/Keychains/SystemRootCertificates.keychain"
class: 0x80001000
    "hpky"<blob>=0x8868BFE08E35C43B386B62F7283B8481C80CD74D  "\210h\277\340\2165\304;8kb\367(;\204\201\310\014\327M"
    "issu"<blob>=0x305F310B3009060355040613024E4C31123010060355040A1309444947494E4F544152311A301806035504031311444947494E4F54415220524F4F542043413120301E06092A864886F70D0109011611696E666F40646967696E6F7461722E6E6C  "0_1\0130\011\006\003U\004\006\023\002NL1\0220\020\006\003U\004\012\023\011DIGINOTAR1\0320\030\006\003U\004\003\023\021DIGINOTAR ROOT CA1 0\036\006\011*\206H\206\367\015\001\011\001\026\"
    "labl"<blob>="DigiNotar Root CA"
    "skid"<blob>=0x8868BFE08E35C43B386B62F7283B8481C80CD74D  "\210h\277\340\2165\304;8kb\367(;\204\201\310\014\327M"
    "snbr"<blob>=0x0C76DA9C910C4E2C9EFE15D058933C4C  "\014v\332\234\221\014N,\236\376\025\320X\223<L"
    "subj"<blob>=0x305F310B3009060355040613024E4C31123010060355040A1309444947494E4F544152311A301806035504031311444947494E4F54415220524F4F542043413120301E06092A864886F70D0109011611696E666F40646967696E6F7461722E6E6C  "0_1\0130\011\006\003U\004\006\023\002NL1\0220\020\006\003U\004\012\023\011DIGINOTAR1\0320\030\006\003U\004\003\023\021DIGINOTAR ROOT CA1 0\036\006\011*\206H\206\367\015\001\011\001\026\"

Well, enough playing. To delete the certificate, we will use the 'delete-certificate' command. We have two choices:
1) Use the -c parameter, which takes the 'common name'
2) Use the SHA-1 fingerprint (safer).

Let's do both.

1) Use the common name. This is the name you see in the GUI, and we can get it from the 'labl' line above. The command becomes:

adil$ sudo security delete-certificate -c "DigiNotar Root CA" /System/Library/Keychains/SystemRootCertificates.keychain

2) As mentioned above, using the SHA-1 fingerprint is less error-prone than relying on common names. To do that, we have to first locate the fingerprint. Notice that it was not showing above when we displayed the certificate?

OK, so how do we get the fingerprint? Simple: we add -Z to the 'find-certificate' command, which then prints the SHA-1 on the first line before everything we have seen above. So we simply 'grep' for the fingerprint:

adil$ security find-certificate -a -c "DigiNotar" -Z /System/Library/Keychains/SystemRootCertificates.keychain |grep SHA-1

SHA-1 hash: C060ED44CBD881BD0EF86C0BA287DDCF8167478C

And now we can get rid of the certificate:

adil$ sudo security delete-certificate -Z  C060ED44CBD881BD0EF86C0BA287DDCF8167478C /System/Library/Keychains/SystemRootCertificates.keychain

Well, that's all. Now, all you would need is to put these two steps in a shell script (find the fingerprint if the certificate exists & delete it), then run it against all your Macs.
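If you prefer scripting this in Python rather than shell, the find-then-delete flow might look like the sketch below. This is a hedged illustration wrapping the macOS 'security' commands shown above; 'delete-certificate' still needs root, and the command construction is deliberately separated from the output parsing so the parsing logic can be sanity-checked on any platform.

```python
import subprocess

KEYCHAIN = "/System/Library/Keychains/SystemRootCertificates.keychain"

def find_cmd(name):
    # security find-certificate -a -c <name> -Z <keychain>
    return ["security", "find-certificate", "-a", "-c", name, "-Z", KEYCHAIN]

def parse_sha1(output):
    # With -Z, the fingerprint appears on a line like "SHA-1 hash: C060ED44..."
    for line in output.splitlines():
        if line.startswith("SHA-1 hash:"):
            return line.split(":", 1)[1].strip()
    return None

def delete_certificate(name):
    # Run as root, exactly like the sudo commands above.
    out = subprocess.run(find_cmd(name), capture_output=True, text=True).stdout
    sha1 = parse_sha1(out)
    if sha1 is not None:
        subprocess.run(["security", "delete-certificate", "-Z", sha1, KEYCHAIN])
    return sha1
```

Calling `delete_certificate("DigiNotar")` on each Mac would then remove the root certificate only if it is actually present.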

Note that, in general, Safari and Chrome honor system-wide certificates; however, some versions of Firefox do not use the Keychain to store/retrieve certificates. Firefox has its own certificate database, and you might need to manipulate that as well. Also note that browsers have their own lists of trusted CAs, so updating Chrome and Firefox would solve the problem as of today.

Update: There are several articles about why revoking this certificate may not be enough because of the way EV SSL is treated in Safari. If you delete the certificate, this should not be a concern. However, there seem to be additional certificates that need to be revoked to be safer.

In any case, I visited the DigiNotar web site in Safari and found a page with an "Order" button that takes you to another page with SSL. As soon as I clicked the "Order" link, Safari warned me that the certificate on the site was invalid (expired).

As far as I can tell chain goes like this:
DigiNotar Root CA -> DigiNotar Services 1024 CA -> *

So this seems to be a good sign. I tried some other sites, but I am yet to find a site that was issued an SSL certificate from the root CA I deleted.

Update2: I saw a test site mentioned here. This is what I got in Safari:

It's good that I am getting a notification. Unfortunately, it is not because the DigiNotar Root CA is missing from my root keychain but because the certificate has expired.

Update3: Apple today released a security patch to put the issue at rest. In their terms, here is what they did:
Description: Fraudulent certificates were issued by multiple certificate authorities operated by DigiNotar. This issue is addressed by removing DigiNotar from the list of trusted root certificates, from the list of Extended Validation (EV) certificate authorities, and by configuring default system trust settings so that DigiNotar's certificates, including those issued by other authorities, are not trusted.

Of course, they would not tell us exactly what they did, but I had a pretty good idea which files they were talking about. Let's look at the last item, "configuring default system trust settings":

adil$ pwd

adil$ ls -l
total 1048
-rw-r--r--  1 root  wheel    5353 Sep  9 17:53 EVRoots.plist
-rw-r--r--  1 root  wheel  167848 Jul  9 23:39 SystemCACertificates.keychain
-rw-r--r--  1 root  wheel  395312 Sep  9 17:53 SystemRootCertificates.keychain
-rw-r--r--  1 root  wheel   86380 Sep  9 17:53 SystemTrustSettings.plist
-rw-r--r--  1 root  wheel  282984 Jul 28  2008 X509Anchors

So I backed up these files before applying the patch and ran a diff. First, let's look at a record in the last one.

The bottom is the original, and above it you see the updated file. Basically, they updated the date and added a new array with a dictionary that sets kSecTrustSettingsResult to 3.

Notice that I am looking at the Key that starts with C060E... which is the SHA-1 fingerprint we got above.

Also notice the IssuerName; we know that's DigiNotar. As noted in some of the links above, there were several certs issued by DigiNotar under other names. I can tell from the diff which ones were affected, but I cannot see a way of figuring them out by looking only at the original file, as the issuer names would be different and I don't have a list of chains where DigiNotar exists.


Find your Video Driver version with PowerShell

Well, I wrote this quite some time ago (bonus for geeks: can you tell from the driver version?) but apparently forgot to post it:

Here is one way you can use WMI and PowerShell to get the version of driver you have installed for your video card(s).

PS C:\> gwmi win32_VideoController |select DeviceID,Name,DriverVersion |ft -a

DeviceID         Name                      DriverVersion
--------         ----                      -------------
VideoController1 ATI Radeon HD 5700 Series 8.812.0.0
VideoController2 ATI Radeon HD 5700 Series 8.812.0.0

gwmi is an alias for Get-WmiObject.
I happen to have two ATI cards. It's not really necessary to select DeviceID and Name. You can simplify it as follows:

PS C:\> (gwmi win32_VideoController)[0].DriverVersion

Why the parentheses? Because that way you can access the properties of the resulting object.

Why [0]? Well, because it's an array, and I know both cards are the same, so it's enough to get the driver version of the first one.

How did I know that I had to use the Win32_VideoController WMI class? Well, I did not, but there is no black magic here, just a bit of guesswork and good ol' trial & error:

PS C:\> gwmi -list |?{$_ -match "video"}

   NameSpace: ROOT\cimv2

Name                                Methods              Properties
----                                -------              ----------
CIM_VideoBIOSElement                {}                   {BuildNumber, Caption, CodeSet, Description...}
CIM_VideoController                 {SetPowerState, R... {AcceleratorCapabilities, Availability, CapabilityDescripti...
CIM_PCVideoController               {SetPowerState, R... {AcceleratorCapabilities, Availability, CapabilityDescripti...
Win32_VideoController               {SetPowerState, R... {AcceleratorCapabilities, AdapterCompatibility, AdapterDACT...
CIM_VideoBIOSFeature                {}                   {Caption, CharacteristicDescriptions, Characteristics, Desc...
CIM_VideoBIOSFeatureVideoBIOSEle... {}                   {GroupComponent, PartComponent}
CIM_VideoSetting                    {}                   {Element, Setting}
Win32_VideoSettings                 {}                   {Element, Setting}
CIM_VideoControllerResolution       {}                   {Caption, Description, HorizontalResolution, MaxRefreshRate...
Win32_VideoConfiguration            {}                   {ActualColorResolution, AdapterChipType, AdapterCompatibili...

Guess which Python string find method is faster?

I came across a question about which of two simple string find methods was faster. So, let's play a game. All we are trying to determine is whether a single character ('ch') passed to our function is lowercase or not. Can you guess which of these four methods will be fastest?

# check result of string find function
def is_lower1(ch):
    return string.find(string.lowercase, ch) != -1

# compare the char to the lowercase version of itself
def is_lower2(ch):
    return ch.lower() == ch

# check the char against all lowercase chars
def is_lower3(ch):
    return ch in string.lowercase

# check the char against the lowercase boundaries
def is_lower4(ch):
    return 'a' <= ch <= 'z'

Clearly, you can guess the first one will be the sore loser. It is searching through all the possible lowercase characters (string.lowercase) with a string function (string.find) to check whether the passed character matches one. The find function returns -1 if it cannot find the character, which is why the result is compared against -1. OK, but how about the rest?

The is_lower2 function also uses a string function (lower), but only to lowercase the passed character, which is then compared against its original value. So there are basically two operations here, but no iteration as in find.

is_lower3 uses the 'in' operator against all possible lowercase values. So our string operation here is listing all possible values with string.lowercase. Is this faster than is_lower2?

is_lower4 compares the passed character against the boundaries of the lowercase letters. There are no iterations or string operations as before, just two comparison operations. That should be fast, right? Note that we are using ASCII characters here for comparison. If you print string.lowercase, 'z' is not the last character; it's '\xff', which looks like a 'y' with two dots over it on my PC, but rest assured the results are not affected in any noticeable way.

So, let's timeit :

if __name__ == '__main__':
    import string
    from timeit import Timer

    t = Timer("is_lower1('A')", "from __main__ import is_lower1")
    print "is_lower1 result: %f" % t.timeit()

    t = Timer("is_lower2('A')", "from __main__ import is_lower2")
    print "is_lower2 result: %f" % t.timeit()

    t = Timer("is_lower3('A')", "from __main__ import is_lower3")
    print "is_lower3 result: %f" % t.timeit()

    t = Timer("is_lower4('A')", "from __main__ import is_lower4")
    print "is_lower4 result: %f" % t.timeit()

You probably guessed it, but here are the results to prove our hunch about which check is fastest:

is_lower1 result: 0.957694
is_lower2 result: 0.322355
is_lower3 result: 0.256491
is_lower4 result: 0.201267

Did you guess it right?
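The code above is Python 2 (note the print statements and string.lowercase). For the record, a rough Python 3 re-run of the two fastest checks might look like this sketch; string.ascii_lowercase replaces string.lowercase, and timeit accepts a globals argument:

```python
# Python 3 version of the two fastest lowercase checks, timed with timeit.
import string
from timeit import timeit

def is_lower3(ch):
    # membership test against all ASCII lowercase letters
    return ch in string.ascii_lowercase

def is_lower4(ch):
    # two ordered comparisons against the boundaries
    return 'a' <= ch <= 'z'

print("is_lower3: %f" % timeit("is_lower3('A')", globals=globals(), number=100000))
print("is_lower4: %f" % timeit("is_lower4('A')", globals=globals(), number=100000))
```

The relative ordering should be the same, though the absolute numbers will differ from the Python 2 results above.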


Fedora 15

It's been five years since I switched from Fedora to Ubuntu. I wanted to see where Fedora is these days, so I downloaded and ran the Fedora 15 (F15) Live CD. A couple of first-impression notes below...

I let the Live CD boot and run F15. It comes with Gnome 3.0 (here is a link to a Gnome 3.0 cheat sheet). I then chose to install it to disk using the link in "Activities" > "Applications".

I chose to partition manually, allowing 500MB for /boot and 30GB for /, as I wanted to use the rest for Ubuntu.

Installation was quick. The wizard is designed to warn about things like a missing swap partition, weak passwords, etc. I also liked the fact that it was able to detect the time zone correctly (in contrast to Macs usually defaulting to the West Coast and requiring me to choose the East Coast). Good job overall.

Software Update:
ISOs are not frequently updated, but when I install Ubuntu it checks with its repos as soon as internet connectivity is established, and almost immediately the Software Update icon is displayed. I waited a bit expecting the same thing to happen with Fedora; it did not. So I ran "Software Update" and of course there were tons of updates available. Lo and behold, I got a cryptic "Transaction error" message as soon as I clicked update:

"Transaction error could not add package update for fedora-release-rawhide-15-3(noarch)updates: fedora-release-rawhide-15-3.noarch"

I looked through the list of updates, found the one that read "Fedora release files | fedora-release-15-3 (noarch)" and unchecked it. That did the trick, and all other updates installed without any issues. However, the error was still there when I tried to update after a reboot!

So, I looked it up on the Fedora forums, and apparently there is a thread here. The first message is from June, so this issue has been around for at least two months but is not yet fixed. As a workaround, you can drop to a terminal window and type the following:

sudo yum update

Sounds easy enough, but as some people pointed out, this is a terrible welcome message for a newcomer to the platform. People expect things to "just work" these days and are less likely to cut Fedora any slack.

Although there was no network connectivity until I selected my wireless network, Fedora had no issues remembering and connecting to my wifi afterwards (see this Apple thread if you are wondering where that comment comes from). Still, from a usability perspective, I would want the OS to ask me to choose a connection upon first login if it detects wifi.

Speaking of network, "Nautilus" > "Browse Network" failed to detect my QNAP samba shares, but I was able to click "Go" > "Location" and access the public shares by typing the smb:// location of the NAS.


Power Management:
It looks like power management is a bit aggressive out of the box, as in OS X. If you do not use the machine for about 20 secs, the screen dims. The fully charged laptop claimed it would drain in about 2hrs, but I am yet to test how fast it discharges under my normal usage.


Look & Feel:
Ubuntu has lively, warm colors (I like the orange) out of the box (OOB); Fedora has grey as its primary color. I think that's a bad choice, as it fails to give a polished look when you log in for the first time. Yes, it's of course easy to change, and some Linux fans loathe eye-candy, but first impressions matter.

Gnome 3.0 is a radical change from the past. It's annoying to save something into the ~/Desktop folder only to find that it does not show up on the desktop. I know the arguments for this, but we will see if the heavy-handed approach will work (I am betting it will not, as it creates confusion).

Also, there is just a "Log Out" option when you click your name on the top right (I can follow the logic), but that means you have to log out first and then reboot/shut down from the login screen. Well, the option to power off is actually there but hidden: you need to hold the "Alt" key while the menu is open. Alternatively, you can hit Alt+F2 and type the command there.

Simply typing the shutdown command would not work, as you must be root.

You can also hit the Windows key to bring up the OS X spotlight-like search tool, find a terminal there, and then run

sudo shutdown

Tiring? Yeah! It does not matter that much on a mobile platform, as people usually prefer to put the machine to sleep anyway, which may be one of the reasons why shutdown is not there, but it seems counter-intuitive on a desktop platform.

One last 'annoyance' was the absence of the 'minimize window' button. To be clear, I am not even talking about what happens when you click "Help > About" in Firefox, where you end up with a modal window you can only get rid of by hitting 'escape' on the keyboard, as there is no button to click; even windows that have a button are missing the minimize and maximize buttons.

By default, windows only have a "close" (X) button.

It's not that difficult to add them using "gconf-editor", which you must install via add/remove programs or simply by typing the following in a terminal window:

sudo yum install gconf-editor

See the screenshot for the line you need to edit. Log out, log back in, and you have the minimize and maximize buttons. You can even shift them from the right to the left, like Ubuntu, by changing the location of ":" like the following:
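The example value seems to have been lost with the screenshot; as a hedged reconstruction from memory (the exact gconf key may differ on your build, so verify it against what gconf-editor shows you), the button_layout value looks like this:

```
# In gconf-editor, the relevant key should be the button_layout key
# visible in the screenshot, e.g. /desktop/gnome/shell/windows/button_layout
# Buttons on the right:
:minimize,maximize,close
# Ubuntu-style, buttons on the left:
close,minimize,maximize:
```

The ":" marks the edge of the titlebar, so everything before it goes on the left and everything after it goes on the right.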


Well, these are just a couple of my first-impression notes. Ars Technica also has two good articles to read: "Fedora - first look" and a Gnome review. There is a lot of talk about the new systemd, and I am looking forward to checking it out.


Your personal domain with Google Apps

So, you finally decided to own your domain name. Who can blame you? A domain of your own sounds way cooler than a generic blogspot.com address, right? ;-)

So, what are the options?

Well, I guess the answer depends on what you want to do with it. There are many services out there that let you register your domain; I used one of the best-known registrars, and their service got better over time. If, for example, all you want is for people to reach your blog when they type your new domain name, almost all registrars do that free of charge.

But then what? I will tell you what I like to do with the domain names I register for personal use. Well, first things first: I love Google services.

  • I would like to have an e-mail service for my new domain, that's managed like gmail. 
  • I would like to create accounts for my family and sometimes friends as well and I would like them to have a common set of services (like calendar, storage area, contacts etc.)

Enter "Google Apps". Google is not a domain registrar per se, but they have a partnership with one, and you can easily get your domain name registered and a Google Apps domain created for $10/yr. Once you do that, Google automatically adjusts all the necessary DNS settings for you, and if you would like to manually edit anything, you can easily do that from within the Google Apps dashboard. Simple!

One added benefit is that your personal information is hidden from the WHOIS directory. Normally, you get charged extra for that. This is especially useful if you would like to have a private domain that you only use for certain (private) activities.

For example, you could buy a random-looking domain and configure an e-mail address on it. Then, use only that e-mail when your finances are involved (bank accounts, eBay, Amazon, etc.).

You might also use Google Docs from that domain only to keep private stuff, and use your, say, Gmail account for everything else. This might also help reduce the attack surface if someone is trying to break into your known account.

If, for example, you had a weak password recovery option on your Gmail and someone guessed it, they still would not know about your secret domain where you keep the important stuff, which might reduce the damage... It's a wild world out there; you can never be careful enough!

Update: Added the pic on top (originally posted by Tom Anderson)


Should you change your DNS?

A couple of days ago, I was talking to a friend who is running a small office in NY. He complained about how they frequently had issues accessing web sites, and about the sluggishness and inconsistencies they were experiencing.

The more we talked, the more it sounded like a look at their DNS servers was in order. They did not, however, have a dedicated DNS server and were using whatever DNS servers their ISP assigned to them.

You do not need to use the DNS servers your ISP assigns to you. There are many public name servers that can be used instead. Google Public DNS is probably the most famous one, and I fully recommend it. OpenDNS is another that has been around for a while.

Not only may using a public DNS improve the speed of your browsing, it may also give you a little extra security.

I recently came across a little (literally - just 163KB) utility called DNS Benchmark by Steve Gibson which can tell you which name servers would be the fastest for you.
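To illustrate what such a benchmark measures, here is a crude Python sketch that times name lookups through whatever resolver callable you hand it, for example socket.gethostbyname, which goes through your currently configured servers. It is not a substitute for DNS Benchmark, which queries each candidate server directly; this only times your current setup.

```python
import socket
import time

def time_lookups(resolve, names, repeat=3):
    """Return the best (lowest) observed resolution time per name, in seconds."""
    best = {}
    for name in names:
        samples = []
        for _ in range(repeat):
            start = time.perf_counter()
            try:
                resolve(name)
            except OSError:
                continue  # lookup failed; skip this sample
            samples.append(time.perf_counter() - start)
        best[name] = min(samples) if samples else None
    return best

# e.g.: time_lookups(socket.gethostbyname, ["www.google.com", "www.bing.com"])
```

Running it before and after switching your router to a public DNS gives a rough before/after comparison.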

As I guessed, Google's name servers, which I have been using for quite some time now, were the fastest for me. In my case, though, it was not name resolution speed that pushed me to switch to a public DNS. I hated it when my ISP intercepted mistyped domain names. When you mistype a name, you should normally get a "page not found" error. Instead, you notice that the url you typed is put into a search web site branded by your ISP and results are shown to you.

ISPs are not really doing this out of goodwill to help you; they make money from it. They get paid for the keywords, and 'appropriate' results show up near the top. You can read about that whole story, and how it is being used for phishing attacks, here.

Once you decide to use a public DNS, the easiest way to implement it is to enter the IP addresses of the public DNS servers in your router and let it distribute them via DHCP. It's pretty easy to do.

Here is the screenshot from my LinkSys (Cisco) E3000 router.

In most cases, you can reach your router's setup pages by typing its address into your browser and logging in. The settings you see above are located under "Setup" > "Basic Setup".

By the way, if you are using Google Chrome, it is up to you whether Google should display suggestions when you mistype a url or a domain name. Here are instructions for turning this option on or off. Good luck!


Ubuntu Unity

I love Ubuntu but I had a hard time selling Ubuntu Unity today...

I've installed every version of Ubuntu released in the last four years and enjoyed it getting more and more user-friendly. I was so comfortable with it that I removed Windows XP from my father-in-law's laptop and replaced it with Ubuntu about a year ago. I had to spend a couple of hours with him to set him up, but that was all there was to it. He has been using it happily since then. And I have peace of mind, as I am no longer worried he will be getting malware / viruses on his laptop, which is connected to my home network.

At the end of April, Ubuntu released 11.04. I upgraded my laptop but left his laptop alone. The 11.04 release brought a radical GUI change named Unity. Personally, I did not find it too difficult to use, although it did not seem to make things any easier than they used to be.

Today, we upgraded my father-in-law's laptop to Ubuntu 11.04 as well. I let him use it for a while, and it was an absolute nightmare for him. He hated the new interface; it was way too confusing for him. One of the reasons for the new interface was to simplify things and make them easy to find. Unfortunately, his experience was the exact opposite. So, I rolled him back to the classic Ubuntu (Gnome) interface. You can find detailed instructions here on how to roll back.

According to press coverage, "The new, highly simplified desktop interface “borrowed consciously” from “other successful platforms,” including Windows and Mac OS X, Shuttleworth said."

So, I will have him try OS X to see if he is going to have the same challenges. If he can use OS X just fine, then maybe these borrowed ideas were not implemented well enough in Unity.

Update [08/01/2011]: He found OS X easier to use than Unity.



Simplee

Simplee is a new service that is worth checking out. If you are using Mint for your finances, you can think of Simplee as the Mint of your healthcare spending.

Setup is easy. You provide the credentials to log into your healthcare provider's website, and the rest is a well-designed page where you can see an overview of your healthcare situation and drill down as necessary.

It was quite astonishing to see how high the charges are and how you are shielded (or not) from such costs.

There is some information that I am not able to explain: it claims that I owe money for some visits, but in reality my healthcare provider covered those. I am not sure if it is some kind of mistake or just a misunderstanding on my part. They have a Twitter account where you can post questions as well.

Oh, here is the LifeHacker article on it.


Passwords - 2

It's been two years since I posted an entry about Passwords and highlighted an issue where even a highly respected company like Amex would only allow you to create weak passwords.

Chase, too, has some 'interesting' limitations on what I can use in passwords. The reason I am highlighting Amex is that their version is extreme, and I love Amex! My experience with their customer service has always been quite positive. Anyway, back to the subject...

Two years ago, below were the rules under which American Express 'allowed' you to create a password:

Your Password should:
* Contain 6 to 8 characters - at least one letter and one number (not case sensitive)
* Contain no spaces or special characters (e.g., &, >, *, $, @)
* Be different from your User ID and your last Password

Two years later, the rules have changed for the better, but not by much:

Your Password:
* Must be different from your User ID
* Must contain 8 to 20 characters, including one letter and number
* May include the following characters: %,&, _, ?, #, =, -
* Your new password cannot have any spaces and will not be case sensitive.
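Out of curiosity, here is a minimal Python sketch of what a validator enforcing the current rules above might look like (the rule list is Amex's as quoted; the function name and regex are my own illustration, not their actual implementation):

```python
import re

# Allowed characters per the stated rules:
# letters, digits, and the symbols %, &, _, ?, #, =, -
# with a length of 8 to 20 characters and no spaces.
ALLOWED = re.compile(r'^[A-Za-z0-9%&_?#=-]{8,20}$')

def is_valid_password(password, user_id):
    """Check a password against the quoted 2011 Amex rules."""
    if not ALLOWED.fullmatch(password):
        return False  # wrong length, spaces, or disallowed characters
    if not any(c.isalpha() for c in password):
        return False  # needs at least one letter
    if not any(c.isdigit() for c in password):
        return False  # needs at least one number
    if password.lower() == user_id.lower():
        return False  # must differ from the User ID (case-insensitively)
    return True
```

Note how the last check has to lowercase both sides, because the rules say the password is not case sensitive, which is exactly the part worth complaining about.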

Why on earth Amex would still insist that their customers cannot create CaSe SenSitiVe passwords is beyond me. It is well-known good practice to mix upper- and lower-case letters in passwords. There is no way the security team at Amex does not know this. So, why not allow it???
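To put a rough number on what case folding costs: a quick back-of-the-envelope calculation, using letters and digits only (ignoring the handful of allowed symbols), shows how much smaller the brute-force search space becomes for an 8-character password:

```python
# Search space for an 8-character password, letters + digits only
case_insensitive = 36 ** 8  # 26 letters + 10 digits
case_sensitive = 62 ** 8    # 26 lower + 26 upper + 10 digits

# Case sensitivity makes the space roughly 77 times larger
ratio = case_sensitive / case_insensitive
print(f"{ratio:.0f}x larger")
```

In other words, by forcing passwords to be case insensitive, Amex hands attackers a search space dozens of times smaller for free.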

I asked them on Twitter to find out. Well, as you can see from the exchange below, they won't say why..

While on the subject, Steve Gibson has a fun page titled Password Haystacks. It is worth taking a look.