Welcome to JeffVan.com

My name is Jeff VanNieulande and I'm an IT Professional from the Metro Detroit area.

Shrinking Disks on VMWare ESXi

June 5th, 2014

I was recently looking to free up some unused resources in our VMWare ESXi 5.1 environment. This is a good thing to do once in a while to make sure you're not over-allocating resources to VMs that don't need them. In looking through the different servers, I found two old Windows Server 2003 VMs that had 100GB more disk space than they would ever need. I wanted to take that 200GB and add it to another server whose data drive was filling up.

Shrinking the disks

Before you work with any disks in VMWare, it's important to do a few things. First things first: back up your data. Second, I like to clean up the disk a bit – defrag and run chkdsk. I originally did this in order to use gparted (see What Didn't Work below), but I noticed that the disks seemed to copy much faster once I did. I had a 6-hour estimated time at first, but after cleaning up it went down to 2 hours. Finally, make sure you don't have any snapshots on the VM!

The best way I found to shrink the disks was the free VMWare utility vCenter Converter Standalone. You may have used this before to do a physical-to-virtual migration, to convert Hyper-V to VMWare, etc. It's a pretty handy tool. In this case, I'm using it to copy a VM to a new VM in the same VMWare environment. The part of this tool we're most interested in is configuring the hardware on the new copy: through the wizard, there's a point where you can edit the settings of the new machine – here you can shrink the disk. It won't let you go lower than the used space – in my case 30GB. I took 100GB off to make the disk 100GB.

I powered off the machine, let the converter run, and it finished in a couple of hours. I powered on the new machine and, voilà, the disk was 100GB smaller! After extensive testing of the new copy, it was time to get rid of the old machine.

Running into problems

Confident in my newly shrunken VM, and confident in my backups, it was time to delete the old machine and reclaim my disk space. Removing the machine failed. I went into the Datastore Browser. I could delete most files, but the .vmdk file – the one that was taking up all my space – would not delete. It seemed like a lock was on it, but no other VMs were using this disk. I was even able to rename it with no trouble.

I then SSHed into one of my hosts. I browsed the datastore (typically /vmfs/volumes/datastorename/vmname) and typed rm machine-name.vmdk. It failed with the error: rm: cannot remove '.vmdk': Resource temporarily unavailable.

What was locking my file? I triple checked each VM and nothing was mounting this drive. This data was orphaned and there was no reason it shouldn’t be deleted. I decided to check the logs. I typed: tail /var/log/vmkernel.log

This is what I got:

2014-05-30T00:46:50.853Z cpu1:40376)DLX: 4185: vol 'SANDatastore', lock at 1199107429376: [Req mode: 1] Not free:
2014-05-30T00:46:50.853Z cpu1:40376)[type 10c00002 offset 1199107429376 v 34, hb offset 3809280
gen 63, mode 1, owner 5273d58c-7b5992ec-7389-90b11c0431da mtime 30161
num 0 gblnum 0 gblgen 0 gblbrk 0]

Note the owner field above – it ends in a MAC address. Now we had a clue as to what was holding the vmdk captive. I threw the MAC address into a MAC address lookup tool to see what company it's registered to. It came back Dell Inc. – which meant it was probably one of my hosts.
I then looked for the matching MAC on each of our hosts (select the host > Configuration tab > Network Adapters). I went through them all and finally found the culprit. I still don't know why this host had a lock on that file, since it was in a different cluster entirely, but it did. I SSHed into that host, changed to the folder, and deleted the vmdk. Success! I went into the Datastores section of vSphere and refreshed – my extra 100GB was freed! I then repeated the whole process for the second VM I wanted to shrink.
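The owner-to-MAC step is easy to script if you run into this a lot. A minimal sketch (the helper name is mine, not a VMware tool) that pulls the MAC out of the lock owner UUID logged above:

```python
def mac_from_lock_owner(owner: str) -> str:
    """Extract the MAC address from an ESXi lock owner UUID.

    The owner field in vmkernel.log ends with 12 hex digits,
    which are the MAC address of the host holding the lock.
    """
    mac_hex = owner.split("-")[-1]  # last segment: 12 hex digits
    return ":".join(mac_hex[i:i + 2] for i in range(0, 12, 2))

# The owner value from the log excerpt above:
print(mac_from_lock_owner("5273d58c-7b5992ec-7389-90b11c0431da"))
# 90:b1:1c:04:31:da
```

With the MAC in hand, a lookup tool (or your own host inventory) tells you which box to SSH into.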

Expanding the disk

Luckily this part is much easier, especially since the VM I was adding the space to was running Windows Server 2008. All you have to do is go into the Properties of the VM and add space to your disk. Then open up Disk Management in Windows, run "Rescan Disks" and extend your partition. Best part: you can do this while the machine is live.
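If you prefer the command line on the Windows side, the same Disk Management steps can be done with diskpart. A sketch only – the volume number here is an example, so check the output of list volume on your own machine first:

```
rescan
list volume
select volume 1
extend
```

Save that as a text file and run diskpart /s filename.txt in an elevated prompt, or type the commands interactively.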

What didn’t work

Originally I thought I could use the gparted live CD to resize the partition. I had recently done this in Hyper-V with no issues. While I was able to resize it, the disk still showed its full size in VMWare (which makes sense). I then tried to create a second, smaller disk on the machine and use Clonezilla to clone the first disk to it. Since Clonezilla doesn't support (to my knowledge) cloning a large disk to a smaller one, I tried cloning just the partition. As expected, the second drive did not boot. This may still be viable by tweaking the Clonezilla settings, using Linux utilities, or a cloning tool like Acronis. By that point, though, I had gone with the VMWare Converter, which was much easier and smoother.


Google Reader is Shutdown – What Does This Mean for Webapps and The Cloud?

March 15th, 2013
Much has been said in the last couple of days about Google's recent decision to shut down its online RSS reader application, Google Reader. While I'm lamenting the loss as a heavy Google Reader user myself, I think the bigger loss was my trust in web apps and the "cloud". Up until now, when we saw webapps shut down it was usually due to their failure – Google has shut down a lot of services – Wave, Gears, Notebook… the list is actually quite large. But Google Reader was different: although Google apparently saw it as a failure, it was by far the dominant application in its category of RSS readers. Google Reader was the 800 lb. gorilla that managed to stomp out a lot of the competition, and now its users are struggling to find alternatives. (Many of the other RSS reader websites I tried to check yesterday were down, unable to handle the massive onslaught of abandoned Reader users looking for a new home.)

Luckily Google is allowing us to export all of our data. This is still raising a lot of questions for me, as someone who has a lot of data in various Google services. If you or your business relies heavily on any webapps or cloud storage as well, you need to ask yourself these questions too.

What happens when the app has an outage?

When your application or data is stored in the cloud, outages are a real possibility. What are your backup plans if the service is unavailable? How does the webapp company try to prevent this? For instance, do they have several datacenters in many locations? (Also – what if you have a network outage? Do you have offline copies of your data?)

What if it’s shut down for good?

What is the warning time before the shutdown? (Google Reader’s is under 5 months.) Will it be handed over to another company? Open-sourced? Has the company shut down other services, and how have they been handled in the past?

What if there’s feature loss?

A lot of the companies I've worked for use old software that's no longer supported, or they haven't upgraded to the latest version for various, completely valid reasons. With a webapp, you don't have this choice. What happens when the application changes? What if a feature is dropped? (A great example of this is iOS apps, which don't give you an option to roll back to a previous version. There have been countless examples of redesigns, dropped features and major bugs that cripple an app. Just this past week Chrome for iOS had a crippling start-up crash that rendered the app useless for many people until an update was pushed out.)

Is your data easily exportable in an open format?

How do you back up your data? Can it easily be downloaded into a format that can be read by other applications? What are these other applications, and have you tested that your data can be moved to them? Can your data be backed up locally on a regular basis and in an automated way?

To be clear, I’m not against web applications or storing data in the cloud. My next RSS Reader will no doubt be another webapp – I’m willing to put up with downtime, potential loss of data and outright shutdowns because of the ease of use and being able to access it anywhere. My RSS feeds aren’t that important in the scheme of things. However, I’m a lot more leery now, and if I ran a business dependent on any webapps or cloud storage, I’d take a long hard look at the worst case scenarios.

Curbing My Smartphone Addiction

September 27th, 2012

The evolution of personal technology in the last few years is really an incredible thing. I bought my first cell phone in 2000, and I was floored that I was able to carry a phone around with me at all times. Now 12 years later, most of us carry not only a simple telephone in our pockets, but one that’s become an increasingly powerful computer.

Of course having such a phone is incredibly useful, but I started to find it to be somewhat of a burden as well. If you haven't experienced the distraction a smartphone brings, I'm sure you've at least seen it – people whipping out their phones constantly to check on who-knows-what. Smartphones have taken over the idle moments of our lives – standing in line, waiting for friends – even waiting to cross the street. I kept becoming more and more guilty of this, and when I found myself checking Instagram or Facebook at stop lights I knew I had a problem. I tried to cut down on it, but it had become a mindless habit – when life stopped, I automatically pulled out the iPhone.

A few months ago I had installed a Jailbreak app called “App Analytics”. It does one simple thing – lists all your apps with the time you’ve spent in them. It was eye opening to say the least. Some applications had racked up hours upon hours of use – and I knew many of those were in small, 30 second increments. I had to cut down.

I started thinking of ways to break my habit. I could just uninstall some apps, but I didn’t want to go that far. I honestly enjoy browsing Twitter or checking up on news – I just wanted to cut back. I thought about maybe finding a way to block out apps during certain times of the day, but that really didn’t fit my needs either.

I finally decided on password protecting the offending apps. There are several Jailbreak tweaks that do this, but I settled on Lockdown Pro because it seemed to have the most features (I’m sure Android users have similar apps as well.) I picked a long, complicated password that would be a pain to type in, and started locking all of those quick information-fix apps that waste my time so much.

So has it been working out? I’ve been doing this for about 2 weeks now, and I have to say I’m very happy with this strategy. Having to put in a long password definitely breaks me out of the mindless pull-out-my-phone-for-30-seconds habit I’ve been in. I actually think that simply seeing the password prompt reminds me “oh yeah, I’m trying to break this habit” and is enough to get me to put my phone away. Then when I make a conscious decision to use an app, I have no problems putting in that long password. Some apps I haven’t even used since I implemented this, and others (like the aforementioned Instagram) I’ve only been checking maybe once or twice a day. Another bad habit is on its way to being broken!


Cleaning Up Your Social Media Life

June 22nd, 2012

I love social media. I'll jump on board anything – from the big guns like Facebook and Twitter to smaller services like Path or Cinemagram. More and more social media services are adding a feature I'm just not into: cross-posting to other services. Almost every service now has a function to post to Twitter and Facebook at the least. This leads to me seeing the same Instagram pic (for instance) over and over again as I make my morning social media rounds. It's kind of annoying!

So how do we clean this up? Well, Facebook has a feature to hide posts from a service. In our Instagram example, you simply hover over the top-right corner of the post, pull down the menu and pick "Hide all by Instagram". Pretty simple.

The other big offender – Twitter – is a bit more difficult. It’s a huge dumping ground for Instagrams, Tumblr posts, Foursquare, and just about every other social media cross-posting. This is what I’d like to focus on today.

We’re going to be looking for Twitter apps with a few key features:

  • Application muting: This is the big one. When someone posts to Twitter, it includes the app or service they posted from. We need an app that will filter out our big offenders by name.
  • Keyword filtering: This will filter out any keyword we put in. Useful for silencing certain hash tags, Justin Bieber babble, or URLs and other service footprints that we can’t filter out with the application muting function.
  • User muting: Completely silence a user without unfollowing them. This is useful if you need to ignore someone for a period of time (like during March Madness) or if you don’t want to offend a friend by unfollowing them.
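The filtering logic itself is trivial – the hard part is finding a client that exposes it. A toy Python sketch of the three filters above (the function and field names are mine, not any Twitter client's API):

```python
def keep_tweet(tweet: dict, muted_apps: set, muted_words: set, muted_users: set) -> bool:
    """Return True if a tweet passes all three filters described above."""
    if tweet["source"] in muted_apps:       # application muting
        return False
    if any(w.lower() in tweet["text"].lower() for w in muted_words):
        return False                        # keyword filtering
    if tweet["user"] in muted_users:        # user muting
        return False
    return True

# An Instagram cross-post gets dropped by the application filter:
print(keep_tweet({"source": "Instagram", "text": "check my pic!", "user": "bob"},
                 {"Instagram"}, {"#MarchMadness"}, set()))  # False
```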


Unfortunately, Application Muting is pretty hard to come by. The other two are a bit more common. Here are some of the apps I’ve found that have these features.


iOS

I haven't been able to find a free iOS app that meets our requirements yet, but here are two paid apps that do the job.

Tweetings is what I personally use on my iPhone. An all around slick client with a lot of customization, and a lot of options for cleaning up our feed. Highly recommended. $2.99 (An OS X version is available as well, although I haven’t tried it.)

TweetAgora offers a lot of different ways to filter tweets, as well as a feature called an “agora” which allows you to filter tweets you want to see – kind of the opposite of what we’re looking for, but a cool feature nonetheless. The full version has application filtering, but it’s unclear whether you can set your own or only use the built in filters. $4.99


Android

On Android, Plume is the only app I've found so far that meets our requirements. Thankfully, it's pretty great, with a lot of features – and the best part is it's free!

Desktop (Cross-Platform)

I don't use Twitter on the desktop much, but a promising client I've been keeping my eye on is Hotot. I'm pretty OS-agnostic (frequently switching between Windows, OS X and Linux) and Hotot works on them all. It's still under development, but working versions are available. It's lightweight and, best of all, has a full-featured filtering system.

There are many other desktop clients out there that I haven't gotten to yet. I'll update this post when I find some with advanced filtering.

Do you know of a client (for any platform) that fits the bill? Let me know in the comments!


Is the iPod Classic Dead?

September 28th, 2011

Every September for several years, Apple has held an iPod-centric event to introduce new iPods. This year there's been no such event, and it seems the iPods are being ignored. Most interesting to me is the lack of updates to the iPod classic. It was bumped up to a 160GB version 2 years ago and hasn't been updated since. As the iPod turns 10 years old next month, I have to wonder: is the device that floated Apple through most of the 2000s nearing its end?

I love my iPod classic. I use it for several hours a day as I listen to music and podcasts. With 160GB, I can carry my entire music collection with me. There’s not another MP3 player in Apple’s line that’s close to this capacity. The largest iPhone holds 32GB and the iPod Touch 64GB. In addition, I find the iPod classic interface to be near perfect. The simplicity of the scroll wheel is a huge reason I think other MP3 players have never really caught on.

I think Apple may be going in one of two ways with the iPod classics:

1. Leave it as it is for the time being. Can the iPod really be improved much? Aside from maybe increasing the capacity (which really doesn’t seem to be in demand) what could be added or changed? The design is simple and works great. Maybe Apple views the iPod Classic with the “if it ain’t broke don’t fix it” mentality.

2. Kill the iPod Classic and push towards iCloud and iOS devices. Apple is really pushing toward a unified iOS platform. They want you to be able to sync all your data between all your iOS devices. I could see Apple pretty much saying “buy an iPhone and put your music in the cloud.” I have no doubts that storing music in the cloud may be the future, but is it ready yet? I can’t imagine being on a long road trip and relying on a 3G signal to stream music/podcasts. And as long as the upper tiers of iCloud are at the prices they are (50GB for US$100) I can’t see iPod Classic users being swayed to this direction.

If the iPod Classic is axed from Apple's offerings, what then? Will iPod Classic users move to another company's device? Will the Zune finally catch on? Will Apple hold onto their scroll wheel patents without implementing them? I'm just hoping my beloved iPod Classic hangs on for a few more years.

VMWare: Backup virtual machines from the command line

August 31st, 2011

I have a proper backup system in place for my virtual machines in VMWare, but sometimes I want to make a quick clone of the entire machine while it's running and dump it on an external drive, burn it to DVD, etc. Here's the method I use for that:

1. Make a snapshot of your VM.

2. SSH into an ESX host.

3. su root

4. cd /vmfs/volumes/ then cd into the volume where the VM is located (this can be found in the Options tab of the Virtual Machine Properties if you're having trouble locating the path)

5. /usr/sbin/vmkfstools -i vmname.vmdk -d 2gbsparse backup_vmname.vmdk (replacing vmname.vmdk with the correct filename). This command copies the machine and splits it into 2GB chunks, which is useful for storing on FAT32 file systems or splitting across several DVDs.

6. Once the command finishes, copy the backup files using Browse Datastore in vSphere or by SFTPing into one of the ESX hosts. (Since you did this as root, you may need to chmod 777 the backup files before copying them.)

That's it! My next step is to incorporate this into some scripts, so I can automatically back up and download the vmdk onto my box.
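As a first pass at that scripting, here's a small Python sketch (the helper name and VM name are mine) that just assembles the vmkfstools command from step 5 – actually running it on the host over SSH is left as the next step:

```python
def backup_cmd(vmname: str) -> str:
    """Build the vmkfstools clone command from step 5.

    Assumes the working directory is already the VM's folder
    under /vmfs/volumes, as in the steps above.
    """
    return (f"/usr/sbin/vmkfstools -i {vmname}.vmdk"
            f" -d 2gbsparse backup_{vmname}.vmdk")

print(backup_cmd("fileserver01"))
# /usr/sbin/vmkfstools -i fileserver01.vmdk -d 2gbsparse backup_fileserver01.vmdk
```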

Virtual to Physical Migration

August 23rd, 2011

We’re currently in the process of moving our data center, and half of our servers are going to one place and the other half to another. One of the problems I’ve run into is our VMWare installation – it’s being moved along with almost all the VMs but one, which needs to go to our location down south. Unfortunately they don’t have a VMWare installation there, so they require a physical machine. My task was to migrate this virtual machine to a physical machine, in a very small amount of time.

VMWare provides a whitepaper (PDF) on doing a V2P migration that involves Microsoft Sysprep and Ghost. While I’d like to try this method, time was of the essence and I needed a quick way to do this. I decided to use Acronis Universal Restore. This is a handy program that will restore an Acronis image to a different set of hardware.

Here’s how I went about it:

  1. Cloned the VM that I wanted to migrate to a physical machine. This machine had to stay up while I was doing the migration, so this allowed me to work on the clone from here on out.
  2. In the Virtual Machine Properties, I changed the network adapters to be disconnected, so as not to cause an IP/hostname conflict with the running machine if the VM were accidentally booted into Windows. I also hooked up an empty virtual disk on which to place the image. (Acronis also supports dumping the image on a network share if you'd like to go that route.)
  3. Made an ISO of Acronis, and booted the VM into it. Imaged the drive and placed the .tib file on the empty virtual disk.
  4. Transferred the image to a USB hard drive.
  5. Unboxed the server and popped the drives in. Hooked up the USB hard drive containing the image, and booted into Acronis.
  6. Picked my image file. Selected “Universal Restore” in the options.
  7. Once the imaging was done, rebooted. The server came right up into Windows! There were a lot of drivers missing, but that’s to be expected.
  8. Fished the driver CD out of the server box and installed all the drivers. No problems here.
  9. Assigned the server a new IP and new hostname, plugged it into the network and tested it. Worked flawlessly!
All in all, the whole process took under 4 hours – the majority of which was the imaging and restore. (This was with about 10 gigs of data – times will vary depending on the size of the image.)


iPod Touch Tracking

April 27th, 2011

Recently it's been revealed that the iPhone tracks and stores location info. Much has been written about this and its privacy implications, so I won't rehash any of that. Since I don't have an iPhone, I can't use the program that maps out all the location info stored on the iPhone. I do, however, have an iPod touch. It's a 2nd gen, but it's running iOS 4, which seems to be when the location tracking started. Now obviously iPod touches don't include GPS, but surprisingly they do have some location capabilities. To see what's being tracked, first we need to understand how iPod touches determine location.


When I first got my iPod touch and loaded up the mapping app, I was surprised that it found my current location. I immediately did some research to see how this was possible. Until April 2010, Apple used a company called Skyhook. Skyhook maintains a database of WiFi access points and their latitude/longitude. So when you connect to an access point, it finds the MAC address in the database and pinpoints your location. How do they get this data? They simply drive around the country and log it all. In April 2010, Apple switched to using its own location data. It appears Apple gets this information by crowd-sourcing iPhone users: when they connect to a WiFi point, the phone sends the MAC address and latitude/longitude of the access point (gathered from GPS) back to Apple for inclusion in its database.


So that brings us to the location database inside the iPod touch. I extracted the consolidated.db file from my iPod and opened it with SQLite Manager. I found the CellLocation table, which is what everyone was initially concerned about – this is where the GPS information would be. Of course, it's empty on the iPod touch. More interestingly, there's the WiFiLocation table. It contains the MAC address of every WiFi point I've ever connected to or been near, along with a timestamp and the latitude and longitude. I checked everything out – the timestamps start in June 2010, when I upgraded to iOS 4 and this info began to be tracked.
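You don't even need SQLite Manager for this – Python's built-in sqlite3 module works too. Here's a sketch against an in-memory stand-in table (the column names are my assumption from the description above; check the schema on your own consolidated.db). One wrinkle worth knowing: iOS stores timestamps as seconds since 2001-01-01 UTC, not the usual Unix epoch:

```python
import datetime
import sqlite3

APPLE_EPOCH = 978307200  # seconds from 1970-01-01 to 2001-01-01 UTC

def to_utc(mac_absolute_time: float) -> datetime.datetime:
    """Convert an iOS timestamp (seconds since 2001-01-01) to UTC."""
    return datetime.datetime.fromtimestamp(
        mac_absolute_time + APPLE_EPOCH, tz=datetime.timezone.utc)

# In-memory stand-in for consolidated.db's WiFiLocation table:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE WiFiLocation"
           " (MAC TEXT, Timestamp REAL, Latitude REAL, Longitude REAL)")
db.execute("INSERT INTO WiFiLocation VALUES"
           " ('00:11:22:33:44:55', 297600000.0, 42.33, -83.05)")
for mac, ts, lat, lon in db.execute("SELECT * FROM WiFiLocation"):
    print(mac, to_utc(ts).date(), lat, lon)  # this timestamp lands in June 2010
```

To run it against the real file, swap ":memory:" for the path to your extracted consolidated.db.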

Today, Apple released a statement saying it’ll be addressing some of the location tracking concerns in its next update of iOS. From Security Week:

Apple also said that over the next few weeks it would release a software update for iOS that would reduce the size of the crowd-sourced Wi-Fi hotspot and cell tower database cached on the iPhone, cease backing up the cache, and delete the cache entirely when Location Services is turned off. Additionally, Apple said that in the next major iOS software release the cache would be encrypted on the iPhone, though a timeline for that was not provided.

I'll be interested in digging into these databases after the update to see what's been changed.


Kindle Popular Highlights Bookmarklet

March 19th, 2011

If you've used a Kindle and have "Popular Highlights" turned on, you may have noticed passages automatically underlined as you're reading. It'll usually say something like "80 highlighters". This feature aggregates what all other Kindle readers are highlighting in that book, and highlights it for you. At first I thought this was pretty weird, but I've grown to like it. I recently found out anyone can access this data on the web at kindle.amazon.com.

Unfortunately, there’s no link from the regular Amazon book pages to the Kindle page – so if you’re browsing Amazon in your browser and want to see the popular highlights, you have to go to kindle.amazon.com and search for it. This frustrated me, so I wrote a small bookmarklet to take you from the Amazon book page to the Kindle page, which includes the popular highlights.

To use it, just drag this link into your bookmark bar: Kindle Popular Highlights

Or use this code:

javascript:location.href=location.href.replace('www.amazon.com','kindle.amazon.com/work');

Unfortunately, it looks like Amazon only provides 10 highlights. This is understandable, because otherwise you could just read the best bits of a book! (Which would be great for a lot of non-fiction books.) I do however wish I could easily view all the popular highlights if I bought the book.

Kerberos Token Size

March 18th, 2011

A couple of years ago a user got an error when opening Outlook 2003:

Cannot start Microsoft Office Outlook. Unable to open the Outlook window. The set of folders could not be opened. The server is not available. Contact your administrator if this condition persists.

Outlook would then immediately close. I verified that the server could be reached (it could). After trying the basic things (recreating the profile, checking the account for expired passwords, permissions issues), I Googled the error. It looked like this was a common error, but everything people suggested was what I had already tried. I struggled with this for some time, trying everything I could think of and trying to at least narrow down the problem. No luck.

Until I removed the user from most of his groups.

It immediately started working. I added the groups back, and I got the error again. Bingo! Now I had another search term to research. After combing through more Google results, I found out about the Kerberos token size. Microsoft explains how this works in this unrelated KB article:

The Kerberos token has a fixed size. If a user is a member of a group either directly or by membership in another group, the security ID (SID) for that group is added to the user’s token. For a SID to be added to the user’s token, it must be communicated by using the Kerberos token. If the required SID information exceeds the size of the token, authentication does not succeed. The number of groups varies, but the limit is approximately 70 to 80 groups.

So not only does each group add its SID to the token, each nested group does as well. Our Active Directory has some notoriously nested groups, so this would drive up the token size even when the user didn't appear to be in an excessive number of groups.
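Microsoft's sizing guidance (KB 327825) gives a rough formula for estimating token size, which makes it easy to see how nesting blows things up. A sketch – the group counts below are made-up examples:

```python
def estimated_token_size(d: int, s: int) -> int:
    """Rough Kerberos token size estimate per Microsoft's KB 327825:
    TokenSize = 1200 + 40*d + 8*s
    d: domain-local groups, plus universal groups outside the user's
       account domain, plus SID-history entries
    s: global groups, plus universal groups in the account domain
    """
    return 1200 + 40 * d + 8 * s

# Direct *and* nested memberships all count, so a user who appears to
# be in only a handful of groups can still approach the default
# 12,000-byte buffer once the nesting is expanded.
print(estimated_token_size(d=200, s=100))  # 10000
```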

Windows 2000's maximum token size was 8,000 bytes. In SP2 this was bumped up to 12,000 bytes, and AFAIK that's where it has remained, even in Windows 7. Luckily, you can increase MaxTokenSize to 65,535 bytes (about 5 times the default). Here's the registry key change:
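This is the standard MaxTokenSize value under the Kerberos Parameters key (0xffff is 65,535 bytes):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]
"MaxTokenSize"=dword:0000ffff
```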


I keep this in a .reg file and run it whenever a new user has the problem. A quick reboot, and it's good to go. From what I've read, it seems a low Kerberos token size can affect a lot more than just Outlook – I've read of people having problems with GPOs not being applied, for instance. I wonder why Microsoft hasn't increased this by default yet, and what downsides – if any – are caused by increasing it.

If you’d like to read more about Kerberos token sizes and how they work, Microsoft provides an interesting whitepaper (.doc).