
R. Kris Hardy

June 26, 2013

Fedora: Boot fails with “unit basic.target may be requested by dependency only”

Filed under: System Administration — Kris @ 10:55 pm

I recently applied updates using yum update on my Fedora 18 workstation, and my system hung during the next boot with the message “unit basic.target may be requested by dependency only.” I was unable to enter debug mode or boot into single user mode.

The problem was that my grub2.cfg file was rebuilt during a kernel upgrade, and it grabbed the wrong kernel as the default. I had upgraded my computer about a month ago from Fedora 17 to 18 using fedup, and fedup had left behind the /boot/vmlinuz-fedup and /boot/initramfs-fedup.img files. When grub2.cfg was rebuilt today (presumably by a call to grub2-mkconfig -o /boot/grub2/grub.cfg), the first kernel it found was /boot/vmlinuz-fedup. This kernel fails during a normal boot because it exists only to bootstrap a fedup upgrade using files that have been previously downloaded. A file it was expecting could not be found, so the kernel halted.

The Fix

1. When booting, select a different kernel in the grub2 menu.

2. If it boots, log in as the root user. If it doesn’t boot, reboot and try a different kernel, or boot from a boot disk.

3. Delete the /boot/vmlinuz-fedup and /boot/initramfs-fedup.img files.

4. Run grub2-mkconfig -o /boot/grub2/grub.cfg

(If you have a UEFI system, you will probably have to run grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg instead, or symlink /boot/efi/EFI/fedora/grub.cfg to /boot/grub2/grub.cfg)

You should see a list of kernels that grub2-mkconfig found, and vmlinuz-fedup should NOT be in that list.

See https://fedoraproject.org/wiki/GRUB_2 for some ideas for dealing with other grub2 issues you might run into.

5. Reboot and see if that fixes your problem.
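For reference, here is how steps 3 and 4 condense into one shell session. This is a sketch: to keep it safe to copy and paste, it works against a scratch directory standing in for /boot, and the kernel filename is made up; on a real system you would remove the files from /boot itself and then run grub2-mkconfig.

```shell
# Scratch directory standing in for /boot, with a made-up kernel filename.
boot=$(mktemp -d)
touch "$boot/vmlinuz-3.9.4-200.fc18.x86_64" \
      "$boot/vmlinuz-fedup" "$boot/initramfs-fedup.img"

# Step 3: remove the leftover fedup files.
rm "$boot/vmlinuz-fedup" "$boot/initramfs-fedup.img"

# Step 4 (on the real system): grub2-mkconfig -o /boot/grub2/grub.cfg
# Afterward, only genuine kernels should remain for grub2-mkconfig to find:
ls "$boot"
```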


June 8, 2013

Amanda backup configuration update causes heavy disk IO by sendmail on Fedora 18

Filed under: System Administration — Kris @ 8:33 pm

I recently migrated my workstation from Fedora 17 to Fedora 18, and so far I have been happy. However, I noticed that my disk started spinning like mad and things seemed really sluggish at times. It had also been getting progressively worse over the last week, so I finally decided that I needed to stop wrapping my “blanky” around my head and telling myself that everything was OK, and actually get to the bottom of the issue.

So, first things first…

I checked top and found that sendmail was consuming about 4% of CPU time. Not huge, but definitely abnormal. The big giveaway though was that my IO Wait was 85%! I’m amazed that I was able to do anything at all!

Next stop was a look at iotop, which showed that I had two things that were consuming the disk time: tracker and sendmail.

I decided to take care of tracker first, which I disabled once and for all by disabling all tracker components in gnome-session-properties, followed by:

tracker-control --hard-reset --remove-config

(Thanks to dd_wizard at FedoraForum.org for the tip on tracker-control)

I had previously disabled tracker, but I didn’t realize that I also had to wipe out the tracker config files in ~/.config/tracker.

Now that tracker was disabled, I checked iotop again and found that [jbd2/sda4-8] was the culprit. This is the ext4 journaling process, so I started doing some trial and error to determine what was causing the large number of disk writes that would create such a high journaling load. Seeing that sendmail was at 4% CPU utilization, I killed it using service sendmail stop, and [jbd2/sda4-8] dropped to practically 0% IO utilization, and the IO wait went to approximately 0% as well.

Now that things were under control, I restarted sendmail using service sendmail start, and the IO wait problems immediately came back. Sendmail was doing some heavy disk IO, so I killed it again and decided to dig into the mail queue.

I used the command sendmail -bp and found that I had 25,000 messages in the mail queue. What the heck? Did I somehow get spam malware on my computer, or was sendmail being used as a relay? I checked the messages and found that they were all either emails to admin1@ or admin2@, or rejection or delay notices stemming from these emails. All the emails to admin1 and admin2 contained details about an Amanda backup server that I run on my workstation. What the heck was happening?

I took a look at the amanda.conf file in my backup set, and found a new section near the bottom of the file from the last update to amanda-server.x86_64 version 3.3.2-3.fc18.

define interactivity inter_email {
    plugin "email"
    property "mailto" "admin1" "admin2"
    property "resend-delay" "10"
    property "check-file" "/tmp/email_input"
    property "check-file-delay" "10"
}
define interactivity inter_tty_email {
    plugin "tty_email"
    property "mailto" "admin1" "admin2"
    property "resend-delay" "10"
    property "check-file" "/tmp/email_input"
    property "check-file-delay" "10"
}

This turned me onto the man page about amanda-interactivity. (It’s also available via man 7 amanda-interactivity.)

Based on that configuration, my computer was sending an email to both admin1@ and admin2@ every 10 seconds to let me know that no acceptable volumes were found in my chg-disk virtual tape changer. Since there were no admin1 or admin2 addresses to accept the email, sendmail got a rejection from GMail (I’m using Google Apps), which it would then attempt to send to the amandabackup user, which I had aliased to my email address. Based on the volume of emails, rejections, and deferral notices, GMail began to throttle the number of messages that it would accept from my IP address because it thought that I was sending spam. This caused the sendmail queue to grow to over 100M, which I assume places a significant processing burden on sendmail as it continuously attempts to send and update the queue. This apparently brought the entire computer to its knees.

I ultimately fixed this by modifying amanda.conf, setting mailto to my email address and resend-delay to 0, so that it would send only one message to me.
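For illustration, the adjusted inter_email section of amanda.conf would look something like this (the address is a placeholder; the property names are the same ones from the section quoted above):

```
define interactivity inter_email {
    plugin "email"
    property "mailto" "you@example.com"   # your real address here
    property "resend-delay" "0"           # send the prompt once, never resend
    property "check-file" "/tmp/email_input"
    property "check-file-delay" "10"
}
```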

I then cleared the sendmail queue using the following commands as root:

# cd /var/spool/mqueue
# grep -l "No acceptable" * | xargs -I {} rm {}
# grep -l "admin1" * | xargs -I {} rm {}
# grep -l "admin2" * | xargs -I {} rm {}
# grep -l "amandabackup" * | xargs -I {} rm {}

While I might have lost a few emails from amandabackup (my amanda user), that was OK.
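If you are nervous about pointing grep -l | xargs rm at a live queue, you can rehearse the pattern in a throwaway directory first (the filenames and message bodies here are made up):

```shell
# Build a throwaway "queue" with one bad message and one good one.
dir=$(mktemp -d)
printf 'To: admin1@example.com\nNo acceptable volumes found\n' > "$dir/qf001"
printf 'To: me@example.com\nA normal message\n' > "$dir/qf002"
cd "$dir"

# Delete only the files whose contents match, as in the cleanup above:
grep -l "No acceptable" * | xargs -I {} rm {}
ls   # only qf002 remains
```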

I then started sendmail using service sendmail start, forced it to flush the queue using sendmail -q -v, and watched for any errors.

A final check with iotop and top showed that everything was back to normal. Whew…


May 8, 2012

Building pywin32-217 from source

Filed under: Development — Kris @ 9:57 pm

Below are the steps that I use in order to build pywin32-217 on Windows 7:

  1. Download the latest pywin32 code from Sourceforge using Mercurial:

    hg clone http://pywin32.hg.sourceforge.net:8000/hgroot/pywin32/pywin32

  2. Install MS Visual Studio 2008 Standard. The Express edition won’t work because the afxres.h header is missing. I’ve tried some other people’s suggestions, but nothing I found works reliably.

  3. Download the Windows 7 SDK with .NET Framework 4, Version 7.1

  4. Download the DirectX SDK

  5. I haven’t gotten this to work yet, nor do I use Exchange… Download the Exchange Server 2000 SDK (The only link I could find was on CNet)

    Also, here’s the link to Microsoft’s Exchange 2000 SDK Developer Tools. I’m not yet sure if this is needed.

  6. To build pywin32 using Microsoft Visual C++ 2008 Standard Edition, do the following on the command prompt inside the pywin32 directory:

    set MSSdk=c:\Program Files\Microsoft SDKs\Windows\v7.1
    set PATH=%PATH%;%MSSdk%\Bin
    python setup.py build -c msvc
    python setup.py install --skip-build


July 2, 2011

Getting Started with Git and SVN

Filed under: Articles, Development — Kris @ 9:32 am


What is Git, and why would you want to use it?

Git is a distributed version control system that was written by Linus Torvalds (the developer of the Linux kernel). “Distributed” means that the entire revision history is stored on your local machine, which allows you to manage commits, rollback changes, merge branches, tag, etc., all without having to maintain a constant connection to the central svn server.

This also allows you to use very non-linear development methods, and incrementally commit changes and publish those changes on SVN in one batch once you know that everything that you’re working on works.

The other really nice part of git is that you can branch your code in-place on your local machine, meaning that you can branch the trunk and make multiple branches (and even branches of branches) to try things out and see what works best. Once you have your changes in place, you can merge the branches back together and commit them to svn.

I’ve been using this for my development since May 2011, and I’ve been really happy with it, although it takes a little time to get used to. This page is a working document of some of the tricks, lessons, and best practices that I’ve picked up as I’ve started to use git. (BTW, I’m so happy with git that all my personal development projects have been entirely migrated to git, both on my workstation and my server.)

If you have any questions, make sure to post a message below. I continue to add to this tutorial so that I have a central place to store my working knowledge of git svn.


Introductory Material (or reference material if you get stuck)

To start with, here is a cheatsheet of git commands, and how they relate to svn. Now, git can do a LOT more than this, but it will help you get started with understanding how git works.

For a more lengthy introduction to git, read the Git Tutorial and then the Everyday GIT with 20 commands or so page.

Typical Workflow with Git and SVN

Before you start this tutorial, if you want to also try committing to a svn server, send me an e-mail at kris@rkrishardy.com with your e-mail address so that I can give you commit access. (I just ask that you be considerate with what you commit, and don’t commit anything distasteful.)

To get started, first, download and install Git. If you are on Windows, download Git from http://git-scm.com. If you are on a Unix/Linux variety, use your repository or package manager.

On Fedora/RedHat/CentOS:

$ sudo yum install git git-svn

On Debian/Ubuntu:

$ sudo apt-get install git-core git-svn

For FreeBSD, check out the instructions for installing packages and ports, depending on your preference.

Now, let’s use the git-svn-sandbox project at code.google.com as an example to show you how this works.

The directory that you want to clone from svn is the one with the trunk/tags/branches directories inside of it. For example, let’s clone git-svn-sandbox.

$ git svn clone https://git-svn-sandbox.googlecode.com/svn/ git-svn-sandbox -Ttrunk -bbranches -ttags --username your@emailaddress.com
This may take a while on large repositories
Checked through r...
Checked out HEAD:
http://code.google.com/p/git-svn-sandbox r...

This will create a git-svn-sandbox folder and clone the entire svn history into the git-svn-sandbox/.git folder. This may take some time because it needs to walk the entire svn revision tree, so grab some tea or coffee and give it a few minutes.

Once the cloning is done, you should have a local ‘master’ branch in place:

$ cd git-svn-sandbox
$ git branch
* master

The ‘*’ shows you which branch you currently have checked out. You should currently be in the “master” branch.

Now, see what remote tracking branches are available (these are on the SVN server):

$ git branch -r

git branch is the command to use for managing local branches. Using git branch, you can create, delete, merge, etc. your local branches. (These are isolated from the SVN remote branches, and branches you create here are not automatically uploaded to the SVN server, so create and delete branches without fear).
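If you want to get a feel for how cheap local branches are before touching a real project, you can rehearse in a scratch repository. A sketch (none of this contacts any remote or SVN server, and the branch names are made up):

```shell
# Scratch repository for experimenting with local branches.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit --allow-empty -q -m "initial commit"

git branch experiment            # create a branch
git branch -m experiment trial   # rename it
git branch -d trial              # delete it (-d refuses unmerged branches)
git branch                       # only the original branch remains
```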

SVN doesn’t handle conflicts smoothly, so here’s what I do. Before I start changing anything, I first create a new local branch and start working on that. If you forget to branch, that’s OK. If you get a conflict during the git svn rebase or git svn dcommit, see the “I did a git svn rebase, but I got a bunch of merge errors. What should I do?” section below.

$ git checkout -b my_branch
Switched to a new branch 'my_branch'

You have created a new branch and switched to it. This command was
shorthand for:

$ git branch my_branch
$ git checkout my_branch

Now, add and/or edit the files you need to. When you are done, take a
look at the list of files that changed.

$ git status
# On branch my_branch
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#	modified:   test.txt
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	abcd.123
no changes added to commit (use "git add" and/or "git commit -a")

Here, test.txt is already in the git index and has been modified but not flagged to be committed. Also, abcd.123 has been added, but is not in the git index.

To add both to the git index, use git add -A (which adds all untracked and modified files to the index).

$ git add -A

Now you can commit the files to the git branch using git commit.

$ git commit
[my_branch db1292d] Saving changes
2 files changed, 3 insertions(+), 0 deletions(-)
create mode 100644 abcd.123

When you are done developing and you want to share your code on the svn server, make sure that you have all the files checked into your development branch.

$ git status

Add and commit any changes that show up, and then check out the master branch.

$ git checkout master
Switched to branch 'master'

If you didn’t remember to do your development in a different branch, and you made modifications in master, follow the instructions in “I did a git svn rebase, but I got a bunch of merge errors. What should I do?” if you run into any problems with the next set of steps.

Rebase your master branch with the svn server.

$ git svn rebase
Current branch master is up to date.

If any files had been committed to svn by other developers, you should see the changes applied in the output of git svn rebase.

Merge in the changes that you want to commit to svn.

$ git merge my_branch
Updating 3877696..f68e10c
test.txt | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 test.txt

If you see any conflicts in the output, you can find the conflicts by using git status.

$ git status

If you want to diff the conflicts, use git diff.

$ git diff

Once the conflicts have been resolved, make sure that all the changes have been committed to master. Then commit the changes to svn using git svn dcommit.

$ git commit
$ git svn dcommit --username your@emailaddress.com
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...
A test.txt
Committed r4
A test.txt
r4 = 4c55148ecefac8b83b626fa568b2bc1f99ea990e (refs/remotes/trunk)
No changes between current HEAD and refs/remotes/trunk
Resetting to the latest refs/remotes/trunk

If git svn dcommit fails with a message stating “Authorization failed: …”, I need to give you commit access to the repository. Please send me an e-mail at kris@rkrishardy.com with your e-mail address so that I can give you commit access. (I just ask that you be considerate with what you commit, and don’t commit anything distasteful.)

If you got the “Committed r…” line, congratulations! Your changes have been published to the svn server!

If you want to continue working on your local development branch, check it back out and rebase it to bring it up to date with the master.

$ git checkout my_branch
$ git rebase master
First, rewinding head to replay your work on top of it...

When you are completely done with your branch, you can delete it by checking out a different branch and using git branch -d.

$ git checkout master
$ git branch -d my_branch
Deleted branch my_branch (was 15971cb).

Congratulations. You now have the basic workflow down. If you run into problems, check out the “Troubleshooting” section below, or take a look at the Git documentation at Git SVN Cheatsheet, the
Git Tutorial, the Everyday GIT with 20 commands or so page, or the extensive Git Users Manual.


I did a git svn rebase, but I got a bunch of merge errors. What should I do?

Before you do anything, revert the rebase by typing:

$ git rebase --abort

Now, make a backup of your current master by branching it.

$ git branch temp

Use git log to figure out which commit was the last one that was sync’d with svn. Here is an example of the output:

$ git log
commit 4ebf8122c5b18ab16167268b1791fa49996e56cc
Author: Kris Hardy
Date: Sat Jul 2 10:01:55 2011 -0400

Modified test.txt

commit ec8a9e15a30c01c7ad7b83d6cb8cde39e1c6a650
Author: hardyrk@gmail.com
Date: Sat Jul 2 13:58:02 2011 +0000


Adding initial files

git-svn-id: https://git-svn-sandbox.googlecode.com/svn/trunk@2 fb457a7e-32b5


commit 96acefc8a443a02568453c7a504064ffc6d8428e
Author: (no author) <(no author)@fb457a7e-32b5-5db0-b168-d315aa6739ac>
Date: Sat Jul 2 13:12:00 2011 +0000

Initial directory structure.





git-svn-id: https://git-svn-sandbox.googlecode.com/svn/trunk@1 fb457a7e-32b5

Look for the latest git-svn-id to find the latest commit. In this example, it
was commit ec8a9e15a30c01c7ad7b83d6cb8cde39e1c6a650.

You only have to use the first few characters of the commit id, so we’ll
use ec8a9 as a short-hand for the full commit string.
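You can let git confirm that an abbreviation is unambiguous by expanding it with git rev-parse. A scratch-repo sketch (the hashes on your machine will differ from the log above):

```shell
# Scratch repository with a single commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit --allow-empty -q -m "demo"

full=$(git rev-parse HEAD)           # full 40-character commit id
short=$(git rev-parse --short HEAD)  # unambiguous abbreviation
git rev-parse "$short"               # expands back to the full id
```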

Now, reset the master branch to the last version that was sync’d with
SVN. NOTE: This command will erase commits that occurred after that
commit, so make sure you backed up your master branch (using git branch)!

$ git reset --hard ec8a9
HEAD is now at ec8a9

Now, you are safe to rebase master with the svn trunk using git svn rebase.

$ git svn rebase
Current branch master is up to date.

If any files had been committed to svn by other developers, you should see the changes applied in the output of git svn rebase.

Merge in the changes that you want to commit to svn.

$ git merge temp
Updating ec8a9..4ebf8
test.txt | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 test.txt

If you see any conflicts in the output, you can find the conflicts by using git status.

$ git status

If you want to diff the conflicts, use git diff.

$ git diff

Once the conflicts have been resolved, make sure that all the changes have been committed to master. Then commit the changes to svn using git svn dcommit.

$ git commit
$ git svn dcommit --username your@emailaddress.com
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...
A test.txt
Committed r4
A test.txt
r4 = 4c55148ecefac8b83b626fa568b2bc1f99ea990e (refs/remotes/trunk)
No changes between current HEAD and refs/remotes/trunk
Resetting to the latest refs/remotes/trunk

I checked out an svn branch (or tag), and now git svn dcommit in master pushes the changes to the svn branch!

When checking out a remote branch or tag, master sometimes stops tracking the svn trunk, and instead starts tracking the svn branch or tag. To fix this:

  1. Check the git master branch back out
  2. Back up any changes
  3. Do a hard reset, setting the svn trunk as the remote tracking branch
  4. Reapply the backed up changes

$ git checkout master
$ git branch temp
$ git reset --hard remotes/trunk
HEAD is now at ...
$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...
$ git merge temp
Updating ...
$ git branch -d temp
Deleted branch temp (was ...
$ git svn dcommit
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk

Adapted from StackOverflow: How do I make git-svn use a particular svn branch as the remote repository?

Stupid Git-SVN tricks

I want to ignore files from git (just like setting ‘svn:ignore’)

To ignore files from git, add the filename or filename pattern to a .gitignore file in the folder that the file is in, or in one of the folder’s ancestors if it is a global ignore (such as ignoring *.o or *~ files, for example).

$ touch .gitignore
$ echo "*.o" >> .gitignore
$ git add .gitignore
$ git commit -m "Ignoring *.o files"

How do I create a tag in SVN?

Use git svn tag.

$ git svn tag 1.0
Copying https://git-svn-sandbox.googlecode.com/svn/trunk at r4 to https://git-svn-sandbox.googlecode.com/svn/tags/1.0...
Found possible branch point: https://git-svn-sandbox.googlecode.com/svn/trunk => https://git-svn-sandbox.googlecode.com/svn/tags/1.0, 4
Found branch parent: (refs/remotes/tags/1.0) a029adfba661303ad73be48fcbb92372fba9a9f1
Following parent with do_switch
Successfully followed parent
r5 = 974647bc0548d3d21c67d7f6fe2904da2f02a846 (refs/remotes/tags/1.0)
$ git branch -r

If you want to check to make sure that master is still tracking the svn trunk, use git svn dcommit -n.

$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...

How do I create a branch in SVN?

Use git svn branch.

$ git svn branch temp
Copying https://git-svn-sandbox.googlecode.com/svn/trunk at r4 to https://git-svn-sandbox.googlecode.com/svn/branches/temp...
Found possible branch point: https://git-svn-sandbox.googlecode.com/svn/trunk => https://git-svn-sandbox.googlecode.com/svn/branches/temp, 4
Found branch parent: (refs/remotes/temp) a029adfba661303ad73be48fcbb92372fba9a9f1
Following parent with do_switch
Successfully followed parent
r6 = 4c0195271d0d586f2facd2707a83653844fec3a2 (refs/remotes/temp)
$ git branch -r

If you want to check to make sure that master is still tracking the svn trunk, use git svn dcommit -n.

$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...

How do I checkout/track a remote branch or tag from SVN, and manage the files using Git?

You can create a git branch, and have it track the remote svn branch using git branch. The git branch will now be synchronized with the svn tag or branch.

$ git branch local/tags/1.0 tags/1.0
$ git branch
* master
$ git branch -r
$ git checkout local/tags/1.0
Switched to branch 'local/tags/1.0'
$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/tags/1.0 ...

My branches are forks of the svn trunk. How can I keep the forks up to date without the conflicts caused by git merge?

See this page for a great walkthrough (hint: use git rebase --onto):

Maintaining a Fork With Git


December 4, 2010

A cat that falls from 100+ feet gets hurt less than one that falls from only 50 feet?

Filed under: Random Thoughts, Videos — Kris @ 5:19 pm

I heard a great story on RadioLab the other day, and I just had to submit the following experiment idea to MythBusters.

The Myth:

Cats that fall from the top of a 32-story building get hurt less than cats that fall from the top of a 7-story building.

The Story:

Two veterinarians, Wayne Whitney and Cheryl Mehlhaff, who worked at the Midtown Veterinary Hospital, noticed that there were a lot of cats that fell from window ledges and roofs of tall buildings in Manhattan. When they studied the data, they found that cats that fell from the 1st through 5th floors (approx. < 50 ft) were often lightly injured, and cats that fell from the 10th floor (> 100 ft) and higher were also lightly injured. However, the cats that fell from between the 5th and 10th floors (approx. 50-100 ft) tended to get seriously injured.

Why was this?

One theory is that the data set is completely tainted, so the conclusion that the veterinarians drew was incorrect.

Another theory is that when cats reach terminal velocity, they relax and spread their body out like a flying squirrel. When they finally hit the ground, they belly-flop, spreading out the force of the impact across their entire body. The cats that fell from the 1st-5th floors did not reach a high enough speed to receive serious injuries. The cats that fell from the 10th story and higher were able to reach terminal velocity and relax. Those cats that fell from between the 5th and 10th floors were, by this theory, not able to reach terminal velocity and assume the “flying-squirrel” pose. These unlucky cats likely landed on their legs, breaking them.

The Background:

I had heard this myth about falling cats before, and I was reminded of the story when it was brought up on the “Falling” episode of the NPR/WNYC radio program “RadioLab”, and again in a follow-up episode, “Gravitational Anarchy”.

The RadioLab episode “Falling” is here:
RadioLab Falling Episode

The hilarious follow-up (RadioLab Podcast Short) is here on their “Gravitational Anarchy” episode that was released on Nov 29th 2010:
RadioLab Gravitational Anarchy Podcast Short

There is also an editorial write-up about this on HowStuffWorks.com
How Cats Survive Falls @ HowStuffWorks.com

What does a cat really do in free-fall/Zero-G?

It looks like, in a Zero-G environment, the cat just flips around and can’t get its bearings, yet it landed feet-first on every single surface. When gravity started being applied, the cat was quick to flip around and land feet-first. It looks like the cat uses the visual cue of the approaching ground, more than any other indicator, to know which direction it needs to point. But when the cat was just floating, it began spinning in circles and couldn’t seem to get its bearings.

Now I’m wondering how quickly they orient themselves in a long fall. Do they flail around until they get closer to the ground, or can they orient themselves quickly using other stimuli such as wind resistance?

Peer-Reviewed Scientific Papers

I just found the abstract from the Journal of the American Veterinary Medical Association that they were talking about in the RadioLab “Falling” and “Gravitational Anarchy” episodes:

High-rise syndrome in cats (Whitney et al)

There is also a more recent article published in the Journal of Feline Medicine and Surgery in 2004:

Feline high-rise syndrome: 119 cases (1998-2001) (Vnuk et al)

The abstract for the second paper seems to contradict the theory in the first (that cats falling 9+ floors are injured less than those falling 4-8), although it doesn’t quite make that distinction, so I’ll have to read the paper.

I also found a list of articles that describe the physics and medical outcomes of falling bodies, cats and otherwise:

Falling Bodies – The Physics Hypertextbook

I’ll have to pull these articles next time my wife has plans and I can sneak away to hit the stacks at my local vet college.

Want to see this experiment on MythBusters?

If you think this would be an interesting MythBusters episode, make sure you comment on the discussion on the MythBusters forum!

As I find out more about this story and check through the papers, I will come back and update this post.


April 14, 2010

Testing an Enterprise Application Integration (EAI) Implementation

Adapted from the following presentation:

“Testing an EAI Implementation”

Matt VanVleet
VP Product Development and Practice Management
Pillar Technology

Enterprise Application Integration Alliance
Columbus, Ohio

Before I begin, I want to give my thanks to Matt VanVleet of Pillar Technology for giving us his time, knowledge and experience for his presentation.  Most of the following article is directly from, or inspired by, what he shared.

I also want to thank the members of the Enterprise Application Integration Alliance for attending.  Without them, valuable presentations like Matt’s would not be possible.

Now to the topic at hand…

If you look at the architecture of a standard Enterprise Application Integration (EAI) implementation, it typically consists of a series of independent applications or systems that are loosely-coupled to one another through a central integration system (such as an Enterprise Service Bus).

EAI projects usually consist of 4 phases, each of which creates unique challenges for the enterprise:

  1. Implementation
  2. Upgrade the EAI
  3. Upgrade the Back-end Systems, Data Warehouse or Applications
  4. Upgrade a 3rd Party Integration

1. Implementation

When EAI projects are started, the EAI team typically works closely with the teams that are in charge of each application that needs to be integrated.  Each team needs to cooperate on development, documentation, and testing.  They also need to coordinate releases so that the integration does not break if some critical component or feature of the application, or of the EAI system itself, is being upgraded.

Since so many teams are involved and the implementations are complex, the cost of implementing an EAI system is high.

In addition, since releases need to be sync’d across the enterprise, it slows down the speed of development and adds at least one additional layer of management.  Furthermore, while the endpoints are loosely coupled to one another, each is tightly coupled to the EAI system itself.  Each EAI-to-endpoint integration creates a potential point of failure, so each change needs to be thoroughly tested before being deployed.

2. Upgrade the EAI

Once an EAI implementation is in place, upgrading the EAI can be as time-consuming and expensive as developing the EAI in the first place.  Each team has to be brought back together to plan, develop, and synchronize the upgrade.

3. Upgrade the Back-end Systems, Data Warehouse or Applications

Some of the teams will be needed for this upgrade, including the EAI team and the team in charge of the application being upgraded.

4. Upgrade a 3rd Party Integration

The EAI team will be very involved in this, and it may require the coordination and help of the 3rd party service provider that you are integrating with.


Since each of these systems is now tightly-coupled to the EAI system, upgrades at any point along the enterprise become major events.  One slight change, unless properly planned, managed and tested, can have a significant impact at some other end of the enterprise.

As the enterprise generates more and more data, the EAI implementation is put under more and more load.  What could be simple tweaks to the persistence layer or server configuration now involve much more risk.

Also, once an EAI implementation is in place, it never gets the attention that the applications themselves get.  Since EAI is behind the scenes, it tends to be an afterthought when it comes to the corporate budget.  It’s necessary, but it’s not “sexy”.

What is the best way to manage this?  How can you ensure that your EAI system is stable, while allowing the teams responsible for each application to be as autonomous as possible?  How can we decrease the upgrade cost of an EAI and also make it flexible enough to not slow down the application developers?


If you look at an EAI implementation as a central data system and don’t worry about how it works on the inside (which is a reasonable assumption when you are just dealing with interfaces), you can simplify it as a black box connected to a series of interfaces: Application-to-EAI, Data Warehouse-to-EAI, Web Service-to-EAI, etc.

At each interface, you can then divide the interface into two halves: 1) Application-side and 2) EAI-side

If you are an application developer, you are developing your application to work with the interface that the EAI implementers expose.  It could be a database socket, a web service, flat files, messaging, etc.  This is essentially the same for the Data Warehouse and Back-end systems (such as SAP).

At each interface, there is typically inbound and outbound traffic.  Some of the traffic will be the result of the application itself (such as adding a customer to a CRM, which is then forwarded to the ESB), some of the traffic will be responses to data sent by the application (a reply from the ESB), and some of the traffic will be sent without any apparent trigger (the ESB sends a message to the application, caused by new data from another application).

To ensure that your EAI implementation and the applications are communicating properly, you can put tests in place to make sure that the systems respond properly.  You can also develop simulated systems, also known as mocks, that act like the system that is at the other end of the interface.

These tests and mocks can help you during any stage in the lifecycle of your EAI implementation:

  1. During development – Test your applications and ESB, independently, to make sure that they respond to events in the right way.
  2. During production – Regularly test your applications and ESB to identify any mismatches or problems immediately.  With up-to-date tests, it is even possible to detect defects in the system that were somehow missed by a development team.  You can then respond proactively and disconnect the communication to the bad system, for example, to keep the other systems across the enterprise running properly.
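The "test against a mock" idea above can be sketched in plain Java. This is a minimal, hand-rolled example (no jUnit or jMock), and every name in it — `EaiGateway`, `MockEaiGateway`, `CrmApp`, the `NEW_CUSTOMER`/`ACK` message shapes — is hypothetical, invented for illustration rather than taken from any real EAI product:

```java
// Hypothetical interface the EAI team exposes to applications.
interface EaiGateway {
    String send(String message);          // request/reply to the ESB
}

// Hand-rolled mock of the EAI side (assumption: the spec says every
// valid message is answered with an "ACK:"-prefixed reply).
class MockEaiGateway implements EaiGateway {
    java.util.List<String> received = new java.util.ArrayList<>();

    public String send(String message) {
        received.add(message);            // record traffic for later assertions
        return "ACK:" + message;          // canned, spec-compliant reply
    }
}

// A tiny "application" under test that talks only to the interface,
// never to a concrete EAI implementation.
class CrmApp {
    private final EaiGateway gateway;
    CrmApp(EaiGateway gateway) { this.gateway = gateway; }

    boolean addCustomer(String name) {
        // Forward the new customer to the ESB and check the reply.
        return gateway.send("NEW_CUSTOMER:" + name).startsWith("ACK:");
    }
}
```

Because `CrmApp` depends only on the interface, the application team can run its tests against `MockEaiGateway` without a live ESB, and the same tests keep working when the real gateway is swapped in.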

Now, let’s use the internal application-to-EAI implementation interface as an example.

Application-to-EAI Testing

When you are developing an EAI or application to be compliant with the Application-to-EAI interface, there are four test assets that need to be developed to ensure that both the application and the EAI system are working properly.

  1. Mock of EAI Implementation – Used to test the application against an EAI system that responds correctly to requests and generates application-bound traffic.
  2. Test of Application interface – Used to test the application to make sure that it responds properly to traffic from the EAI implementation.
  3. Mock of Application – Used to test the EAI system against a fake application that responds correctly to requests and generates EAI-bound traffic.
  4. Test of EAI interface – Used to test the EAI system to make sure that it responds properly to traffic from the application.
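Assets 3 and 4 work together: a mock application drives the EAI side with known traffic, and a test asserts on what the EAI does with it. The sketch below is hypothetical — `EaiRouter` stands in for the EAI's routing logic (assumed here to be a simple topic router), and `MockApp` is the hand-rolled mock application; neither name comes from a real product:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the EAI's internal routing (assumption: messages are
// appended to a per-topic queue, ';'-delimited for simplicity).
class EaiRouter {
    private final Map<String, StringBuilder> queues = new HashMap<>();

    void publish(String topic, String payload) {
        queues.computeIfAbsent(topic, t -> new StringBuilder())
              .append(payload).append(';');
    }

    // Drain and clear a topic's queue; empty string if nothing arrived.
    String drain(String topic) {
        StringBuilder q = queues.remove(topic);
        return q == null ? "" : q.toString();
    }
}

// Mock application (asset 3): generates known EAI-bound traffic so the
// test of the EAI interface (asset 4) can assert on the outcome.
class MockApp {
    void simulateOrderBurst(EaiRouter eai) {
        eai.publish("orders", "order-1");
        eai.publish("orders", "order-2");
    }
}
```

The EAI team can run this kind of test on every build: if a routing change silently drops or reorders messages, the assertion fails before any real application is affected.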

3rd Party-to-EAI Testing

If you are integrating with a 3rd party, the layout is the same.  However, there may be a larger barrier between the EAI team and the 3rd party team.  In that case, you probably will have to assume that their testing is in place, or work with them to ensure that they have their tests and mocks in place.

If they don’t have a valid mock system (which, unfortunately happens), you may have to build a mock internally using what you can learn about the 3rd-party system.

If you don’t currently have tests or mocks in place, one way to start is to use a “wire-tap” or proxy to log messages, requests and responses in order to build the test cases.

End-to-End Integration Testing

The need to test an entire enterprise application end-to-end can be significantly reduced by doing as much testing at the atomic level as possible.  By testing the application-to-EAI interfaces and the intra-EAI processing through EAI-specific tests, the need for end-to-end testing is, theoretically, eliminated.  If the tests cover all the corner cases, and all the EAI processes and interfaces pass those tests, then the EAI is working properly.  At each interface, if all tests that invoke the mock services pass, then the applications work properly with the interfaces.  End-to-end testing, at that point, isn't necessary.

However, there are often reasons to run automated end-to-end tests just to “be absolutely sure” that data is flowing from one endpoint application to another.  In that case, building the full end-to-end tests and observers requires the collaboration of the application teams.
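An end-to-end check reduces to: one application sends, the bus carries, and an observer on the far side verifies arrival. The sketch below is hypothetical end to end — `InMemoryEsb`, `SenderApp`, `ReceiverObserver`, and the message format are all invented stand-ins, since a real test would drive the actual applications and bus:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stand-in for the bus: a single in-memory queue.
class InMemoryEsb {
    private final Deque<String> queue = new ArrayDeque<>();
    void send(String msg) { queue.add(msg); }
    String receive() { return queue.poll(); }   // null when empty
}

// One endpoint application: publishes a business event onto the bus.
class SenderApp {
    void addCustomer(InMemoryEsb esb, String name) {
        esb.send("NEW_CUSTOMER:" + name);
    }
}

// Observer owned by the receiving application's team: checks that the
// message arrived at the far side of the bus in the expected shape.
class ReceiverObserver {
    boolean sawCustomer(InMemoryEsb esb, String name) {
        String msg = esb.receive();
        return msg != null && msg.equals("NEW_CUSTOMER:" + name);
    }
}
```

The sender and the observer belong to different teams, which is exactly why end-to-end tests need the cross-team collaboration described above: each team contributes the piece that exercises its own side of the flow.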


These are a few of the many open-source tools that can be used to help you develop the tests and mocks.

Testing:

  1. jUnit (or a member of the xUnit family)
  2. Depending on the EAI implementation, some IDEs (such as NetBeans w/ OpenESB) have testing integrated into the system.

Mocking:

  1. jMock
  2. Apache Camel mocking
  3. SoapUI (web service mocking)


  1. Someone has to “own” the specification for each interface.  Personally, I believe that the specification should be owned by each application team, in collaboration with the EAI team.  However, due to the complexity of the interface or the size of the implementation, it may be better to have the EAI team own the specification, or perhaps put it in the hands of a higher-level enterprise data architect.
  2. Someone has to develop the mock of the EAI.  A collaborative effort between the application team and the EAI team usually works best to build the interface specification, and then the mock is maintained by the EAI team so that it is always up-to-date with the production and development EAI implementations.
  3. Someone has to develop the tests for the EAI.  Again, a collaborative effort to build the specification is needed, and then the tests are maintained by the EAI team so that they are always up-to-date.
  4. Someone has to develop the mock of the applications.  A cross-team collaborative effort is needed, and then the mock is maintained by the application team so that it is always up-to-date.
  5. Someone has to develop the tests for the applications.  The same application-EAI collaborative effort is needed to build the specification, and then the tests are maintained by the application team so that they are always up-to-date.


Testing and mocking for EAI implementations allows each team to stay independent by testing their applications against an interface and catching problems before they go to production.  This decreases the cost of developing and maintaining enterprise architectures by reducing the interdependence between each development team, as well as by reducing the potential for regressions.

In one case that Matt talked about, a company he worked with was planning an upgrade to their EAI system, which ordinarily is a very expensive process.  Matt’s company had created all the tests and mocks for their old EAI system, so during the upgrade, they ran all their tests and mocks against the new EAI and they were able to immediately find the regressions and fix them.  This decreased the development and implementation time by over 50-fold.

Testing of EAI implementations is becoming more mainstream, but it does involve some investment up front.  That investment, however, will pay handsome dividends whenever you upgrade any system in the enterprise and need to retest to make sure it works properly.  Automated testing, built once, can save you weeks of hand-testing during each upgrade.

Thanks again to Matt VanVleet for his presentation and to the members of the EAI Alliance for attending.  If you are in the Columbus, OH area and are interested in EAI at any level, from programmer through executive, be sure to sign up (it’s free), and take part in our meetings.  http://www.meetup.com/Enterprise-Application-Integration

March 24, 2010

COHAA Meeting: Exploiting Agile for a Large Integration Project

If you’re in the Columbus, Ohio area and interested in Agile Development or Enterprise Application Integration, be sure to check out this event!


If you want to find more events like this, make sure you join the Enterprise Application Integration Alliance at Meetup.com!  Membership is free!

At one time or another, tired from a long day of work, we have all attended an Agile presentation that we were really excited about — only to have our excitement quickly fade when the presenter opened by explaining what an iteration was, leaving us to wonder if it would be rude to walk out.

I can’t guarantee it won’t happen again, but I can guarantee it won’t happen this Thursday (3/25/10). For the first time COHAA is putting together a presentation geared towards the Intermediate to Advanced Agilist. If you are interested in having Agile events beyond Agile 101 here in Columbus, please do your part by joining us this Thursday, and forwarding this to any of your fellow Agilists. As usual, the event is free, and dinner will be provided.

*Bonus: Rubber chickens will be provided for anyone who asks questions such as, “What is a backlog?”

RSVP at:

Please join the Central Ohio Agile Association as Kim Berry, PMP, a Senior Project Manager at Fiserv, presents a case study in the successful use of Agile in a large integration program that had geographically dispersed teams.

Who Should Come: This presentation is targeted at an Intermediate to Advanced Agile enthusiast.

Date and Time:
Thursday, March 25, 2010 (free)
6:00 – 6:30 PM Food/ Networking; 6:30 – 8:00 PM Speaker

CareWorks Technology
5555 Glendon Ct.
Dublin, OH 43016

Re-certification PDUs: PMP 1; CBAP 1

Special Thanks to our food sponsor Pillar Technology. Please RSVP at www.cohaa.org.

SPEAKER: Kim Berry, PMP, is a Senior Project Manager with Fiserv, managing one of the largest cross-business unit endeavors to deploy a mobile banking solution. While at Fiserv, she became an early adopter of RUP (Rational Unified Process) for her business unit. Over the last 2 years, she has worked to integrate agile techniques within a RUP framework. She started her Project Management journey in 2001 and has remained in IT for 20 years, with a majority of it in the Business Intelligence field. Kim attained her PMI certification in 2008 and remains an active member of PMI. She is also a Six Sigma Yellow Belt, and has received accolades for organizing the resource and portfolio needs for the Enterprise Data Warehouse team. In her spare time, she is an Assistant Scoutmaster for the local Boy Scout Troop and recently received a district-level leadership award.

RSVP at:


March 21, 2010

Columbus OH – Summer Science/Tech Events for Kids

Filed under: Uncategorized — Kris @ 6:02 pm

If you live in Central Ohio and are looking for tech or science events for your kids during the summer, be sure to check out these events from my friends at TechLife Columbus!

February 11, 2010

Upcoming Meetings for Agileists

Filed under: Uncategorized — Tags: , , , — Kris @ 10:06 am

We had a small, but very fun, knowledgeable and interesting group for the kickoff of the Enterprise Application Integration Alliance! We had some great discussions on the issues of integrating and improving legacy applications, scaling message buses, successes and failures in data architecture, and inheriting messaging systems that are being used in ways that were never intended.

We’re working on our next meeting, so stay tuned.

In the mean time…

Make sure you check out these other two meetings that I’ll be at, and I thought might be interesting to you as well!

The Agile Enterprise (A TechLife Columbus meetup)

February 11th, 2010
4:00 PM – 7:00 PM
(Sorry for the late notice.)

Price: FREE

Dublin Entrepreneurial Center
7003 Post Rd.
Dublin, OH 43016

Click for more information & to RSVP for The Agile Enterprise

This will be a recurring meetup series, oriented towards large organizations and growing organizations. Everyone is welcome.

These meetups will follow the OpenSpace/Unconference format – meaning, come prepared to participate! There will be no presentations or handing-down of platitudes! Participants will share their experiences, challenges, and successes. Everyone's input is valuable; everyone has something to learn and something to teach.

Topics will rotate and be chosen by the participants, to discuss issues such as:

* How can a growing organization stay agile, yet implement the discipline necessary to scale and survive?
* Innovation and agility of a startup, within a large enterprise
* How much process?
* Making cultural changes
* Moving the needle when you’re in the trenches
* Getting support from employees when you’re making changes from the top
* Knowing what needles to move and knowing when you’ve made a difference

This month, we will attempt to focus on practicing agile development methodologies in a large organization. What methods do not scale and how do you adjust?

COHAA Day In The Life Tour: Progressive Medical

February 25th, 2010
7:30 AM – 9:00 AM

Price: FREE, but limited to 15 people. You must RSVP here.

Progressive Medical
250 Progressive Way,
Westerville, OH 43082

Click for more information & to RSVP for the COHAA Day In The Life Tour: Progressive Medical

Host: Ben Blanquera, VP Information Services

Due to limited space, RSVPs are limited to 15 people. Please only RSVP if you are attending.

Please join COHAA as we take a tour of Progressive Medical’s Agile space. Come learn ‘Why Progressive Chose Agile’.

Ben Blanquera is the Vice President of Information Services for Progressive Medical, Inc. In his role, Ben is responsible for project/portfolio management, application development, business intelligence, and business analysis.

Progressive Medical, Inc. is a nationwide managed-care and health-care cost-containment company. It coordinates care for workers' compensation, auto no-fault, and personal injury protection cases. Progressive Medical is an Inc. 500 Hall of Fame company.

I hope to see you there or at the next EAIA meeting!

Stay warm!


February 9, 2010

Refactoring – A Haiku

Filed under: Uncategorized — Tags: — Kris @ 10:31 am

Refactor today
Write, test, run; Some fail, some pass
Mind gone to jelly

