
R. Kris Hardy

June 26, 2013

Fedora: Boot fails with “unit basic.target may be requested by dependency only”

Filed under: System Administration — Kris @ 10:55 pm

I recently applied updates using yum update on my Fedora 18 workstation, and my system hung during the next boot with the message “unit basic.target may be requested by dependency only.” I was unable to enter debug mode or boot into single user mode.

The problem was that my grub2.cfg file was rebuilt during a kernel upgrade, and it grabbed the wrong kernel as the default. I had upgraded my computer about a month ago from Fedora 17 to 18 using fedup, and fedup had left behind the /boot/vmlinuz-fedup and /boot/initramfs-fedup.img files. When grub2.cfg was rebuilt today (presumably by a call to grub2-mkconfig -o /boot/grub2/grub.cfg), the first kernel that it found was /boot/vmlinuz-fedup. That kernel failed during boot because it exists only to bootstrap a fedup upgrade using files that have previously been downloaded. A file it was expecting could not be found, so the kernel halted.
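If you want to confirm which kernel grub2 will use by default, the grubby tool that ships with Fedora can report it. In the broken state described above, it would have pointed at the fedup kernel:

# grubby --default-kernel
/boot/vmlinuz-fedup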

The Fix

1. When booting, select a different kernel in the grub2 menu.

2. If it boots, log in as the root user. If it doesn’t boot, reboot and try a different kernel, or boot from a boot disk.

3. Delete the /boot/vmlinuz-fedup and /boot/initramfs-fedup.img files.
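As root, removing them is just:

# rm /boot/vmlinuz-fedup /boot/initramfs-fedup.img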

4. Run grub2-mkconfig -o /boot/grub2/grub.cfg

(If you have a UEFI system, you will probably have to run grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg instead, or symlink /boot/efi/EFI/fedora/grub.cfg to /boot/grub2/grub.cfg)

You should see a list of kernels that grub2-mkconfig found, and vmlinuz-fedup should NOT be in that list.

See https://fedoraproject.org/wiki/GRUB_2 for some ideas for dealing with other grub2 issues you might run into.

5. Reboot and see if that fixes your problem.


June 8, 2013

Amanda backup configuration update causes heavy disk IO by sendmail on Fedora 18

Filed under: System Administration — Kris @ 8:33 pm

I recently migrated my workstation from Fedora 17 to Fedora 18, and so far I have been happy. However, I noticed that my disk started spinning like mad and things seemed really sluggish at times. It had also been getting progressively worse over the last week, so I finally decided that I needed to stop wrapping my “blanky” around my head and telling myself that everything was OK, and actually get to the bottom of the issue.

So, first things first…

I checked top and found that sendmail was consuming about 4% of CPU time. Not huge, but definitely abnormal. The big giveaway though was that my IO Wait was 85%! I’m amazed that I was able to do anything at all!

Next stop was a look at iotop, which showed that I had two things that were consuming the disk time: tracker and sendmail.

I decided to take care of tracker first. I disabled it once and for all by turning off all tracker components in gnome-session-properties, followed by:


tracker-control --hard-reset --remove-config

(Thanks to dd_wizard at FedoraForum.org for the tip on tracker-control)

I had previously disabled tracker, but I didn’t realize that I also had to wipe out the tracker config files in ~/.config/tracker.
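If the --remove-config option above doesn’t catch everything, the per-user configuration directory can also be removed by hand (this deletes all of your tracker settings):

$ rm -rf ~/.config/tracker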

Now that tracker was disabled, I checked iotop again and found that [jbd2/sda4-8] was the culprit. This is the ext4 journaling process, so I started doing some trial and error to determine what was causing the large number of disk writes behind such a high journaling load. Seeing that sendmail was at 4% CPU utilization, I killed it using service sendmail stop, and [jbd2/sda4-8] dropped to practically 0% IO utilization, and the IO wait went to approximately 0%.

Now that things were under control, I restarted sendmail using service sendmail start, and the IO wait problems immediately came back. Sendmail was doing some heavy disk IO, so I killed it again and decided to dig into the mail queue.

I used the command sendmail -bp and found that I had 25,000 messages in the mail queue. What the heck? Did I somehow get spam malware on my computer, or was sendmail being used as a relay? I checked the messages and found that they were all either emails to admin1@ or admin2@, or rejection or delay notices stemming from those emails. All the emails to admin1 and admin2 contained details about an Amanda backup server that I run on my workstation. What the heck was happening?
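Incidentally, sendmail -bp (the same listing you get from mailq) ends with a “Total requests:” summary line, so a quick way to keep an eye on the queue size is:

# sendmail -bp | tail -n 1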

I took a look at the amanda.conf file in my backup set, and found a new section near the bottom of the file from the last update to amanda-server.x86_64 version 3.3.2-3.fc18.


define interactivity inter_email {
    plugin "email"
    property "mailto" "admin1" "admin2"
    property "resend-delay" "10"
    property "check-file" "/tmp/email_input"
    property "check-file-delay" "10"
}
define interactivity inter_tty_email {
    plugin "tty_email"
    property "mailto" "admin1" "admin2"
    property "resend-delay" "10"
    property "check-file" "/tmp/email_input"
    property "check-file-delay" "10"
}

This turned me on to the man page for amanda-interactivity (also available via man 7 amanda-interactivity).

Based on that configuration, my computer was sending an email to both admin1@ and admin2@ every 10 seconds to let me know that no acceptable volumes were found in my chg-disk virtual tape changer. Since there were no admin1 or admin2 addresses to accept the email, sendmail got a rejection from GMail (I’m using Google Apps), which it would then attempt to send to the amandabackup user, which I had aliased to my email address. Based on the volume of emails, rejections, and deferral notices, GMail began to throttle the number of messages it would accept from my IP address because it thought I was sending spam. This caused the sendmail queue to grow to over 100M, which I assume places a significant processing burden on sendmail as it continuously attempts to send and update the queue. This apparently brought the entire computer to its knees.

I ultimately fixed this by modifying amanda.conf, setting mailto to my email address and resend-delay to 0 so that it would only send me 1 message.
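For reference, the relevant part of my amanda.conf ended up looking roughly like this (the address below is a placeholder; use your own):

define interactivity inter_email {
    plugin "email"
    # one address that actually exists (placeholder below)
    property "mailto" "your@emailaddress.com"
    # 0 = send the notice only once instead of every 10 seconds
    property "resend-delay" "0"
    property "check-file" "/tmp/email_input"
    property "check-file-delay" "10"
}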

I then cleared the sendmail queue using the following commands as root:


# cd /var/spool/mqueue
# grep -l "No acceptable" * | xargs -I {} rm {}
# grep -l "admin1" * | xargs -I {} rm {}
# grep -l "admin2" * | xargs -I {} rm {}
# grep -l "amandabackup" * | xargs -I {} rm {}

While I might have lost a few emails from amandabackup (my amanda user), that was OK.

I then started sendmail using service sendmail start, forced it to flush the queue using sendmail -q -v, and watched for any errors.

A final check with iotop and top showed that everything was back to normal. Whew…


May 8, 2012

Building pywin32-217 from source

Filed under: Development — Kris @ 9:57 pm

Below are the steps that I use in order to build pywin32-217 on Windows 7:

  1. Download the latest pywin32 code from Sourceforge using Mercurial:

    hg clone http://pywin32.hg.sourceforge.net:8000/hgroot/pywin32/pywin32

  2. Install MS Visual Studio 2008 Standard. The Express edition won’t work because the afxres.h header is missing. I’ve tried some other people’s suggestions, but nothing I found works reliably.

  3. Download the Windows 7 SDK with .NET Framework 4, Version 7.1

  4. Download the DirectX SDK

  5. I haven’t gotten this to work yet, nor do I use Exchange… Download the Exchange Server 2000 SDK (The only link I could find was on CNet)

    Also, here’s the link to Microsoft’s Exchange 2000 SDK Developer Tools. I’m not yet sure if this is needed.

  6. To build pywin32 using Microsoft Visual C++ 2008 Standard Edition, do the following on the command prompt inside the pywin32 directory:


    set MSSdk=c:\Program Files\Microsoft SDKs\Windows\v7.1
    set PATH=%PATH%;%MSSdk%\Bin
    python setup.py build -c msvc
    python setup.py install --skip-build


July 2, 2011

Getting Started with Git and SVN

Filed under: Articles, Development — Kris @ 9:32 am


What is Git, and why would you want to use it?

Git is a distributed version control system that was written by Linus Torvalds (the developer of the Linux kernel). “Distributed” means that the entire revision history is stored on your local machine, which allows you to manage commits, roll back changes, merge branches, tag, etc., all without having to maintain a constant connection to the central svn server.

This also allows you to use very non-linear development methods: you can commit changes incrementally and then publish them to SVN in one batch once you know that everything you’re working on works.

The other really nice part of git is that you can branch your code in-place on your local machine, meaning that you can branch the trunk and make multiple branches (and even branches of branches) to try things out and see what works best. Once you have your changes in place, you can merge the branches back together and commit them to svn.

I’ve been using this for my development since May 2011, and I’ve been really happy with it, although it takes a little time to get used to. This page is a working document of some of the tricks, lessons, and best practices that I’ve picked up as I’ve started to use git. (BTW, I’m so happy with git that all my personal development projects have been entirely migrated to git, on both my workstation and my server.)

If you have any questions, make sure to post a message below. I continue to add to this tutorial so that I have a central place to store my working knowledge of git svn.

Thanks!

Introductory Material (or reference material if you get stuck)

To start with, here is a cheatsheet of git commands, and how they relate to svn. Now, git can do a LOT more than this, but it will help you get started with understanding how git works.
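For quick orientation, the everyday commands map roughly like this (a simplification, assuming the repository was cloned with git svn clone):

svn checkout <url>   ->  git svn clone <url>
svn update           ->  git svn rebase
svn status           ->  git status
svn diff             ->  git diff
svn add <file>       ->  git add <file>
svn commit           ->  git commit (local), then git svn dcommit (publish to svn)
svn copy (to a tag)  ->  git svn tag <name>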

For a more lengthy introduction to git, read the Git Tutorial and then the Everyday GIT with 20 commands or so page.

Typical Workflow with Git and SVN

Before you start this tutorial, if you want to also try committing to a svn server, send me an e-mail at kris@rkrishardy.com, and give me your e-mail address so that I can give you commit access. (I just ask that you be considerate with what you commit, and don’t commit anything that is not tasteful).

To get started, first, download and install Git. If you are on Windows, download Git from http://git-scm.com. If you are on a Unix/Linux variety, use your repository or package manager.

On Fedora/RedHat/CentOS:

$ sudo yum install git git-svn

On Debian/Ubuntu:

$ sudo apt-get install git-core git-svn

For FreeBSD, check out the instructions for installing packages and ports, depending on your preference.

Now, let’s use the git-svn-sandbox project at code.google.com as an example to show you how this works.

The directory that you want to clone from svn is the one with the trunk/tags/branches directories inside of it. For example, let’s clone git-svn-sandbox.


$ git svn clone https://git-svn-sandbox.googlecode.com/svn/ git-svn-sandbox -Ttrunk -bbranches -ttags --username your@emailaddress.com
This may take a while on large repositories
Checked through r...
...
Checked out HEAD:
http://code.google.com/p/git-svn-sandbox r...

This will create a git-svn-sandbox folder and clone the entire svn history into the git-svn-sandbox/.git folder. This may take some time because it needs to walk the entire svn revision tree, so grab some tea or coffee and give it a few minutes.

Once the cloning is done, you should have a local ‘master’ branch in place:


$ cd git-svn-sandbox
$ git branch
* master

The ‘*’ shows you which branch you currently have checked out. You should currently be in the “master” branch.

Now, see what remote tracking branches are available (these are on the SVN server):


$ git branch -r
tags/1.0
test
trunk

git branch is the command to use for managing local branches. Using git branch, you can create, delete, merge, etc. your local branches. (These are isolated from the SVN remote branches, and branches you create here are not automatically uploaded to the SVN server, so create and delete branches without fear).

SVN doesn’t handle conflicts smoothly, so here’s what I do. Before I start changing anything, I first create a new local branch and start working on that. If you forget to branch, that’s OK. If you get a conflict during the git svn rebase or git svn dcommit, see the “I did a git svn rebase, but I got a bunch of merge errors. What should I do?” section below.


$ git checkout -b my_branch
Switched to a new branch 'my_branch'

You have created a new branch and switched to it. This command was shorthand for:


$ git branch my_branch
$ git checkout my_branch

Now, add and/or edit the files you need to. When you are done, take a look at the list of files that changed.


$ git status
# On branch my_branch
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#   modified:   test.txt
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#   abcd.123
no changes added to commit (use "git add" and/or "git commit -a")

Here, test.txt is already in the git index and has been modified, but the change has not been staged for commit. Also, abcd.123 has been added, but is not yet in the git index.

To add both to the git index, use git add -A (which adds all untracked and modified files to the index).

$ git add -A

Now you can commit the files to the git branch using git commit.


$ git commit
[my_branch db1292d] Saving changes
2 files changed, 3 insertions(+), 0 deletions(-)
create mode 100644 abcd.123

When you are done developing and you want to share your code on the svn server, make sure that you have all the files checked into your development branch.


$ git status

Add and commit any changes that show up, and then checkout the master branch.


$ git checkout master
Switched to branch 'master'

If you didn’t remember to do your development in a different branch, and you made modifications in master, follow the instructions in “I did a git svn rebase, but I got a bunch of merge errors. What should I do?” if you run into any problems with the next set of steps.

Rebase your master branch with the svn server.


$ git svn rebase
...
Current branch master is up to date.

If any files had been committed to svn by other developers, you should see the changes applied in the output of git svn rebase.

Merge in the changes that you want to commit to svn.


$ git merge my_branch
Updating 3877696..f68e10c
Fast-forward
test.txt | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 test.txt

If you see any conflicts in the output, you can find the conflicts by using git status.

$ git status

If you want to diff the conflicts, use git diff.

$ git diff

Once the conflicts have been resolved, make sure that all the changes have been committed to master. Then commit the changes to svn using git svn dcommit.


$ git commit
$ git svn dcommit --username your@emailaddress.com
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...
A test.txt
Committed r4
A test.txt
r4 = 4c55148ecefac8b83b626fa568b2bc1f99ea990e (refs/remotes/trunk)
No changes between current HEAD and refs/remotes/trunk
Resetting to the latest refs/remotes/trunk

If git svn dcommit failed with a message stating “Authorization failed: …”, I need to give you commit access to the repository. Please send me an e-mail at kris@rkrishardy.com, and give me your e-mail address so that I can give you commit access. (I just ask that you be considerate with what you commit, and don’t commit anything that is not tasteful).

If you got the “Committed r…” line, congratulations! Your changes have been published to the svn server!

If you want to continue working on your local development branch, check it back out and rebase it to bring it up to date with the master.


$ git checkout my_branch
$ git rebase master
First, rewinding head to replay your work on top of it...

When you are completely done with your branch, you can delete it by checking out a different branch and using git branch -d.


$ git checkout master
$ git branch -d my_branch
Deleted branch my_branch (was 15971cb).

Congratulations. You now have the basic workflow down. If you run into problems, check out the “Troubleshooting” section below, or take a look at the Git documentation at the Git SVN Cheatsheet, the Git Tutorial, the Everyday GIT with 20 commands or so page, or the extensive Git Users Manual.

Troubleshooting

I did a git svn rebase, but I got a bunch of merge errors. What should I do?

Before you do anything, revert the rebase by typing:


$ git rebase --abort

Now, make a backup of your current master by branching it.


$ git branch temp

Use git log to figure out which commit was the last one that was sync’d with svn. Here is an example of the output:


$ git log
commit 4ebf8122c5b18ab16167268b1791fa49996e56cc
Author: Kris Hardy
Date:   Sat Jul 2 10:01:55 2011 -0400

    Modified test.txt

commit ec8a9e15a30c01c7ad7b83d6cb8cde39e1c6a650
Author: hardyrk@gmail.com
Date:   Sat Jul 2 13:58:02 2011 +0000

    Adding initial files

    git-svn-id: https://git-svn-sandbox.googlecode.com/svn/trunk@2 fb457a7e-32b5

commit 96acefc8a443a02568453c7a504064ffc6d8428e
Author: (no author) <(no author)@fb457a7e-32b5-5db0-b168-d315aa6739ac>
Date:   Sat Jul 2 13:12:00 2011 +0000

    Initial directory structure.

    git-svn-id: https://git-svn-sandbox.googlecode.com/svn/trunk@1 fb457a7e-32b5

Look for the latest git-svn-id to find the latest commit. In this example, it was commit ec8a9e15a30c01c7ad7b83d6cb8cde39e1c6a650.

You only have to use the first few characters of the commit id, so we’ll use ec8a9 as a short-hand for the full commit string.

Now, reset the master branch to the last version that was sync’d with SVN. NOTE: This command will erase commits that occurred after that commit, so make sure you backed up your master branch (using git branch)!


$ git reset --hard ec8a9
HEAD is now at ec8a9

Now, you are safe to rebase master with the svn trunk using git svn rebase.

$ git svn rebase
...
Current branch master is up to date.

If any files had been committed to svn by other developers, you should see the changes applied in the output of git svn rebase.

Merge in the changes that you want to commit to svn.


$ git merge temp
Updating ec8a9..4ebf8
Fast-forward
test.txt | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 test.txt

If you see any conflicts in the output, you can find the conflicts by using git status.


$ git status

If you want to diff the conflicts, use git diff.


$ git diff

Once the conflicts have been resolved, make sure that all the changes have been committed to master. Then commit the changes to svn using git svn dcommit.


$ git commit
$ git svn dcommit --username your@emailaddress.com
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...
A test.txt
Committed r4
A test.txt
r4 = 4c55148ecefac8b83b626fa568b2bc1f99ea990e (refs/remotes/trunk)
No changes between current HEAD and refs/remotes/trunk
Resetting to the latest refs/remotes/trunk

I checked out an svn branch (or tag), and now git svn dcommit in master pushes the changes to the svn branch!

When checking out a remote branch or tag, master sometimes stops tracking the svn trunk, and instead starts tracking the svn branch or tag. To fix this:

  1. Check the git master branch back out
  2. Back up any changes
  3. Do a hard reset, setting the svn trunk as the remote tracking branch
  4. Reapply the backed up changes


$ git checkout master
$ git branch temp
$ git reset --hard remotes/trunk
HEAD is now at ...
$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...
$ git merge temp
Updating ...
$ git branch -d temp
Deleted branch temp (was ...
$ git svn dcommit
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk
...

Adapted from StackOverflow: How do I make git-svn use a particular svn branch as the remote repository?

Stupid Git-SVN tricks

I want to ignore files from git (just like setting ‘svn:ignore’)

To ignore files from git, add the filename or filename pattern to a .gitignore file in the folder that the file is in, or in one of the folder’s ancestors if it is a global ignore (such as ignoring *.o or *~ files, for example).


$ touch .gitignore
$ echo "*.o" >> .gitignore
$ git add .gitignore
$ git commit -m "Ignoring *.o files"

How do I create a tag in SVN?

Use git svn tag.


$ git svn tag 1.0
Copying https://git-svn-sandbox.googlecode.com/svn/trunk at r4 to https://git-svn-sandbox.googlecode.com/svn/tags/1.0...
Found possible branch point: https://git-svn-sandbox.googlecode.com/svn/trunk => https://git-svn-sandbox.googlecode.com/svn/tags/1.0, 4
Found branch parent: (refs/remotes/tags/1.0) a029adfba661303ad73be48fcbb92372fba9a9f1
Following parent with do_switch
Successfully followed parent
r5 = 974647bc0548d3d21c67d7f6fe2904da2f02a846 (refs/remotes/tags/1.0)
$ git branch -r
tags/1.0
trunk

If you want to check to make sure that master is still tracking the svn trunk, use git svn dcommit -n.


$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...

How do I create a branch in SVN?

Use git svn branch.


$ git svn branch temp
Copying https://git-svn-sandbox.googlecode.com/svn/trunk at r4 to https://git-svn-sandbox.googlecode.com/svn/branches/temp...
Found possible branch point: https://git-svn-sandbox.googlecode.com/svn/trunk => https://git-svn-sandbox.googlecode.com/svn/branches/temp, 4
Found branch parent: (refs/remotes/temp) a029adfba661303ad73be48fcbb92372fba9a9f1
Following parent with do_switch
Successfully followed parent
r6 = 4c0195271d0d586f2facd2707a83653844fec3a2 (refs/remotes/temp)
$ git branch -r
tags/1.0
temp
trunk

If you want to check to make sure that master is still tracking the svn trunk, use git svn dcommit -n.


$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/trunk ...

How do I checkout/track a remote branch or tag from SVN, and manage the files using Git?

You can create a git branch, and have it track the remote svn branch using git branch. The git branch will now be synchronized with the svn tag or branch.


$ git branch local/tags/1.0 tags/1.0
$ git branch
local/tags/1.0
* master
$ git branch -r
tags/1.0
temp
trunk
$ git checkout local/tags/1.0
Switched to branch 'local/tags/1.0'
$ git svn dcommit -n
Committing to https://git-svn-sandbox.googlecode.com/svn/tags/1.0 ...

My branches are forks of the svn trunk. How can I keep the forks up to date without the conflicts caused by git merge?

See this page for a great walkthrough (hint, use git rebase --onto):

Maintaining a Fork With Git


April 14, 2010

Testing an Enterprise Application Integration (EAI) Implementation

Adapted from the following presentation:

“Testing an EAI Implementation”

Matt VanVleet
VP Product Development and Practice Management
Pillar Technology

Enterprise Application Integration Alliance
Columbus, Ohio
http://www.meetup.com/Enterprise-Application-Integration
4/8/2010

Before I begin, I want to give my thanks to Matt VanVleet of Pillar Technology for giving us his time, knowledge and experience for his presentation.  Most of the following article is directly from, or inspired by, what he shared.

I also want to thank the members of the Enterprise Application Integration Alliance for attending.  Without them, valuable presentations like Matt’s would not be possible.

Now to the topic at hand…

If you look at the architecture of a standard Enterprise Application Integration (EAI) implementation, it typically consists of a series of independent applications or systems that are loosely-coupled to one another through a central integration system (such as an Enterprise Service Bus).

EAI projects usually consist of 4 phases, each of which creates unique challenges for the enterprise:

  1. Implementation
  2. Upgrade the EAI
  3. Upgrade the Back-end Systems, Data Warehouse or Applications
  4. Upgrade a 3rd Party Integration

1. Implementation

When EAI projects are started, the EAI team typically works closely with the teams that are in charge of each application that needs to be integrated.  Each team needs to cooperate on the development, documentation, and testing.  They also need to coordinate releases so that the integration does not break if some critical component or feature of the application or the EAI system itself is being upgraded.

Since so many teams are involved and the implementations are complex, the cost of implementing an EAI system is high.

In addition, since releases need to be sync’d across the enterprise, it slows down the speed of development and adds at least one additional layer of management.  Furthermore, while each endpoint is loosely coupled to the others, it is tightly coupled to the EAI system itself.  Each EAI-to-endpoint integration creates a potential point of failure, so each change needs to be thoroughly tested before being deployed.

2. Upgrade the EAI

Once an EAI implementation is in place, upgrading the EAI can be as time consuming and expensive as developing the EAI in the first place.  Each team has to be brought back together to plan, develop and synchronize the upgrade.

3. Upgrade the Back-end Systems, Data Warehouse or Applications

Some of the teams will be needed for this upgrade, including the EAI team and the team in charge of the application being upgraded.

4. Upgrade a 3rd Party Integration

The EAI team will be very involved in this, and it may require the coordination and help of the 3rd party service provider that you are integrating with.

Issues

Since each of these systems is now tightly-coupled to the EAI system, upgrades at any point along the enterprise become major events.  One slight change, unless properly planned, managed and tested, can have a significant impact at some other end of the enterprise.

As the enterprise generates more and more data, the EAI implementation is put under more and more load.  What could be simple tweaks to the persistence layer or server configuration now involve much more risk.

Also, once an EAI implementation is in place, it never gets the attention that the applications themselves get.  Since EAI is behind the scenes, it tends to be an afterthought when it comes to the corporate budget.  It’s necessary, but it’s not “sexy”.

What is the best way to manage this?  How can you ensure that your EAI system is stable, while allowing the teams responsible for each application to be as autonomous as possible?  How can we decrease the upgrade cost of an EAI and also make it flexible enough to not slow down the application developers?

Solution

If you look at an EAI implementation as a central data system and don’t worry about how it works on the inside (which is a reasonable assumption when you are just dealing with interfaces), you can simplify it as a black box connected to a series of interfaces: Application-to-EAI, Data Warehouse-to-EAI, Web Service-to-EAI, etc.

At each interface, you can then divide the interface into two halves: 1) Application-side and 2) EAI-side

If you are an application developer, you are developing your application to work with the interface that the EAI implementers expose.  It could be a database socket, a web service, flat files, messaging, etc.  This is essentially the same for the Data Warehouse and back-end systems (such as SAP).

At each interface, there is typically inbound and outbound traffic.  Some of the traffic will be the result of the application itself (such as adding a customer to a CRM, which is then forwarded to the ESB), some of the traffic will be responses to data sent by the application (a reply from the ESB), and some of the traffic will be sent without any apparent trigger (the ESB sends a message to the application, caused by new data from another application).

To ensure that your EAI implementation and the applications are communicating properly, you can put tests in place to make sure that the systems respond properly.  You can also develop simulated systems, also known as mocks, that act like the system that is at the other end of the interface.

These tests and mocks can help you during any stage in the lifecycle of your EAI implementation:

  1. During development – Test your applications and ESB, independently, to make sure that they respond to events in the right way.
  2. During production – Regularly test your applications and ESB to identify any mismatches or problems immediately.  With up-to-date tests, it is even possible to detect defects in the system that were somehow missed by a development team.  You can then respond proactively and disconnect the communication to the bad system, for example, to keep the other systems across the enterprise running properly.

Now, let’s use the internal application-to-EAI implementation interface as an example.

Application-to-EAI Testing

When you are developing an EAI or application to be compliant with the Application-to-EAI interface, there are 4 test assets that need to be developed in order to assure that both the application and EAI system are working properly.

  1. Mock of EAI Implementation – Used to test the application against an EAI system that responds correctly to requests and generates application-bound traffic.
  2. Test of Application interface – Used to test the application to make sure that it responds properly to traffic from the EAI implementation.
  3. Mock of Application – Used to test the EAI system against a fake application that responds correctly to requests and generates EAI-bound traffic.
  4. Test of EAI interface – Used to test the EAI system to make sure that it responds properly to traffic from the application.

3rd Party-to-EAI Testing

If you are integrating with a 3rd party, the layout is the same.  However, there may be a larger barrier between the EAI team and the 3rd party team.  In that case, you probably will have to assume that their testing is in place, or work with them to ensure that they have their tests and mocks in place.

If they don’t have a valid mock system (which, unfortunately happens), you may have to build a mock internally using what you can learn about the 3rd-party system.

If you don’t currently have tests or mocks in place, one way to start is to use a “wire-tap” or proxy to log messages, requests and responses in order to build the test cases.
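For an HTTP-based interface, for example, even a plain packet capture on the ESB host can give you raw request/response pairs to build the first tests from (the host and port below are placeholders for your own ESB endpoint):

# capture full packets to and from the ESB endpoint for later analysis
tcpdump -i any -s 0 -w esb-capture.pcap host esb.example.com and port 8080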

End-to-End Integration Testing

The necessity of testing an entire enterprise application can be significantly reduced by doing as much testing at the atomic level as possible.  By testing the application-to-EAI interfaces and the intra-EAI processing through EAI-specific tests, the need for end-to-end testing is, theoretically, eliminated.  If the tests cover all the corner cases, and all the EAI processes and interfaces pass the tests, then the EAI is working properly.  At each interface, if all the tests that invoke the mock services pass, then the applications work properly with the interfaces.  End-to-end testing, at that point, isn’t necessary.

However, there are commonly reasons for doing automated end-to-end tests just to “be absolutely sure” that data is flowing from one endpoint application to another.  In that case, it requires the collaboration of the application teams in order to build the tests and observers for the full end-to-end tests.

Technology

These are a few of the many open-source applications that can be used to help you develop the tests and mocks.

Tests

  1. jUnit (or a member of the xUnit family)
  2. Depending on the EAI implementation, some IDEs (such as Netbeans w/ OpenESB) have testing integrated into the system.

Mocks

  1. jMock
  2. Apache Camel mocking
  3. SoapUI (web service mocking)

Considerations

  1. Someone has to “own” the specification for each interface.  Personally, I believe that the specification should be owned by each application team, in collaboration with the EAI team.  However, due to the complexity of the interface or the size of the implementation, it may be better to have the EAI team own the specification, or perhaps put it in the hands of a higher-level enterprise data architect.
  2. Someone has to develop the mock of the EAI.  A collaborative effort between the application team and the EAI team usually works best to build the interface specification, and then the mock is maintained by the EAI team so that it is always up-to-date with the production and development EAI implementations.
  3. Someone has to develop the tests for the EAI.  Again, a collaborative effort to build the specification is needed, and then the tests are maintained by the EAI team so that they are always up-to-date.
  4. Someone has to develop the mock of the applications.  A cross-team collaborative effort is needed, and then the mock is maintained by the application team so that it is always up-to-date.
  5. Someone has to develop the tests for the applications.  The same application-EAI collaborative effort is needed to build the specification, and then the tests are maintained by the application team so that they are always up-to-date.

Summary

Testing and mocking for EAI implementations allows each team to stay independent by testing their applications against an interface and catching problems before they go to production.  This decreases the cost of developing and maintaining enterprise architectures by reducing the interdependence between each development team, as well as by reducing the potential for regressions.

In one case that Matt talked about, a company he worked with was planning an upgrade to their EAI system, which ordinarily is a very expensive process.  Matt’s company had created all the tests and mocks for the old EAI system, so during the upgrade they ran them against the new EAI, immediately found the regressions, and fixed them.  This reduced the development and implementation time more than 50-fold.

Testing of EAI implementations is becoming more mainstream, but it does involve some investment up front.  That investment, however, will pay handsome dividends when you upgrade any of the systems in your enterprise and need to retest to make sure that everything works properly.  Automated testing, built once, can save you weeks of hand-testing during each upgrade.

Thanks again to Matt VanVleet for his presentation and to the members of the EAI Alliance for attending.  If you are in the Columbus, OH area and are interested in EAI at any level, from programmer through executive, be sure to sign up (it’s free), and take part in our meetings.  http://www.meetup.com/Enterprise-Application-Integration

March 24, 2010

COHAA Meeting: Exploiting Agile for a Large Integration Project

If you’re in the Columbus, Ohio area and interested in Agile Development or Enterprise Application Integration, be sure to check out this event!

http://www.cohaa.org/content/?q=node/32

If you want to find more events like this, make sure you join the Enterprise Application Integration Alliance at Meetup.com!  Membership is free!

At one time or another, tired from a long day of work, we have all attended an Agile presentation that we were really excited about — only to have our excitement quickly fade when the presenter opened by explaining what an iteration was, leaving us to wonder if it would be rude to walk out.

I can’t guarantee it won’t happen again, but I can guarantee it won’t happen this Thursday (3/25/10). For the first time COHAA is putting together a presentation geared towards the Intermediate to Advanced Agilist. If you are interested in having Agile events beyond Agile 101 here in Columbus, please do your part by joining us this Thursday, and forwarding this to any of your fellow Agilists. As usual, the event is free, and dinner will be provided.

*Bonus: Rubber Chickens will be provided for anyone who asks questions such as, “What is a backlog?”

RSVP at:
http://www.cohaa.org/content/?q=node/32

Please join the Central Ohio Agile Association as Kim Berry, PMP, a Senior Project Manager at Fiserv, presents a case study in the successful use of Agile in a large integration program that had geographically dispersed teams.

Who Should Come: This presentation is targeted at an Intermediate to Advanced Agile enthusiast.

Date and Time:
Thursday, March 25, 2010 (free)
6:00 – 6:30 PM Food/ Networking; 6:30 – 8:00 PM Speaker

CareWorks Technology
5555 Glendon Ct.
Dublin, OH 43016

Re-certification PDU’s: PMP 1; CBAP 1;

Special Thanks to our food sponsor Pillar Technology. Please RSVP at www.cohaa.org.

SPEAKER: Kim Berry, PMP, is a Senior Project Manager with Fiserv, managing one of the largest cross-business unit endeavors to deploy a mobile banking solution. While at Fiserv, she became an early adopter of RUP (Rational Unified Process) for her business unit. Over the last 2 years, she has worked to integrate agile techniques within a RUP framework. She started her Project Management journey in 2001 and has remained in IT for 20 years, with a majority of it in the Business Intelligence field. Kim attained her PMI certification in 2008 and remains an active member of PMI. She is also a Six Sigma Yellow Belt, and has received accolades for organizing the resource and portfolio needs for the Enterprise Data Warehouse team. In her spare time, she is an Assistant Scoutmaster for the local Boy Scout Troop and recently received a district-level leadership award.

RSVP at:
http://www.cohaa.org/content/?q=node/32


January 5, 2010

Terry Chay – 1500 Lines of Code

Filed under: Development — Kris @ 9:34 pm

Here is an outstanding article on web development philosophy that really got me thinking today, written by Terry Chay of WordPress.  It’s long, but well worth the read.

1500 Lines of Code

This really got me thinking, and I’ll probably work on a comment to it this week.


December 12, 2009

Subversion Fix: svn copy causes “Repository moved permanently to ‘…’; please relocate”

Filed under: Articles, Debugging — Kris @ 12:54 pm

Background

Subversion is a version control system. It can run either as its own server (svnserve) or as an Apache module (mod_dav_svn.so).

When using the mod_dav_svn module for Apache, an svn copy operation on the repository itself can fail if the VirtualHost configuration for subversion is not correct. Put simply, if Apache itself and mod_dav_svn are serving content from the same path, then conflicts can occur. Apache can get confused if it attempts to serve a physical file instead of routing the request through mod_dav_svn.

“svn copy …” operations will fail, while “svn update …”, “svn commit …”, and “svn checkout …” operations work fine.

Detail of the problem, diagnosing the problem, and the fix are below.



December 10, 2009

Permission Denied (13) When Opening Socket in PHP & Apache

This post covers two cases that I’ve run into that cause Permission Denied (13) errors when opening sockets in PHP.

Situation #1:  SELinux prevents httpd from opening sockets

I ran into this simple, but annoying, problem after I migrated my development workstation to Fedora 12.

Problem:

A large PHP application that I have developed at Submerged Solutions (SandPiper Accounting) began throwing Permission Denied (13) system exceptions when attempting to send mail through Zend Framework’s Zend_Mail library.

All the phpunit unit tests worked fine and could send e-mail, but sending failed once the usability tests started and the HTTP requests that sent e-mail were handled through Apache.

The Apache instance was running as user apache / group apache, and PHP (mod_php) therefore ran as the same user and group.

The exception occurred in Zend_Mail_Protocol_Abstract->_connect(), immediately following the socket opening call “stream_socket_client(…)”.

File: Zend/Mail/Protocol/Abstract.php; Line 224

50: abstract class Zend_Mail_Protocol_Abstract
51: {
...
218: protected function _connect($remote)
219: {
220: $errorNum = 0;
221: $errorStr = '';
222:
223: // open connection
224: $this->_socket = @stream_socket_client($remote, $errorNum, $errorStr, self::TIMEOUT_CONNECTION);
225: ...

fopen() calls using http and ftp protocols also failed:

Warning: fopen(…) [function.fopen]: failed to open stream: Permission denied in …

The fix:

The problem turned out to be the “httpd_can_network_connect” SELinux boolean, which is disabled by default in Fedora 12.

In a shell console, run as root:

# /usr/sbin/setsebool httpd_can_network_connect=1
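That only changes the boolean for the running system. To make it persist across reboots, add the -P flag, which writes the change to the SELinux policy store (it takes a few seconds to run):

# /usr/sbin/setsebool -P httpd_can_network_connect=1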

Thanks to durwood, who pointed this out on PHP.net.

“Bug” Report at RedHat.com.

More info on SELinux.

Situation #2: PHP forbids opening socket to 255.255.255.255

A reader of this blog brought a problem of his to me.  I had never seen it before, so it was definitely interesting.

Problem:

This reader was using a PHP script that he had found to send Wake-on-LAN (WoL) packets on his local network.  The script worked fine in Windows, but failed on his Fedora 10 server.  The error he received was Permission Denied (13).

His WoL packets were sent via udp to the broadcast address 255.255.255.255.  This worked fine in Windows, but failed in Linux.

His server’s PHP installation had socket support enabled, and udp was a registered stream socket transport.

His SELinux was disabled, so Situation #1 did not apply to him.

This is probably distribution specific.  I’m running Fedora 12, and had no such issues, whereas the person facing this problem was running Fedora 10.

Solution:

Either PHP or the user running the PHP instance (“apache” in this case) was being forbidden from opening sockets to 255.255.255.255.  It turns out this is somewhat common.  Even when running the script as “root”, you can still get permission denied errors.

I came upon this short comment on php.net about someone else getting permission denied errors on socket_connect() calls.

There was also this comment, which showed an easy way to get the broadcast address for the computer’s network interface. This method seems to work; however, it is limited to Linux since it relies upon the following utilities: ifconfig, grep and cut.  It may work if you compile Windows ports of these utilities, or use cygwin.  (Note: The code snippet at PHP.net has errors.  A revised script is pasted below.)

Here’s the way to get the broadcast address:

exec("ifconfig | grep Bcast | cut -d \":\" -f 3 | cut -d \" \" -f 1",$addr);
$addr=array_flip(array_flip($addr));

By getting the broadcast address of the network interface, you can send Wake-on-LAN magic packets to that address rather than to 255.255.255.255.  Doing this, the sockets can be connected successfully, and the permission denied errors were resolved.

Here’s the “fixed” code from PHP.net.  It hasn’t been tested, so you very well may need to modify it for your needs.

<?php
/**
 * Wake-on-LAN
 *
 * @return boolean
 *   TRUE:    Socket was created successfully and the message has been sent.
 *   FALSE:   Something went wrong
 *
 * @param string|array  $mac   The MAC address of the WOL-enabled computer you
 *                             want to wake up. Can also be an array of MAC
 *                             addresses.
 *
 * @param string|array  $addr  The address to broadcast to. Normally this is
 *                             255.255.255.255, so that is the default and you
 *                             don't need to do anything with it.
 *
 *                             If you get permission denied errors when using
 *                             255.255.255.255, you can set $addr = false to
 *                             get the broadcast address from the network
 *                             interface using the ifconfig command.
 *
 *                             $addr can also be an array of broadcast IP
 *                             values.
 *
 * Example 1:
 *   When the message has been sent you will see the message "Done...."
 *   if ( wake_on_lan('00:00:00:00:00:00'))
 *      echo 'Done...';
 *   else
 *      echo 'Error while sending';
 */

function wake_on_lan($mac, $addr=false, $port=7) {
    if ($addr === false){
        exec("ifconfig | grep Bcast | cut -d \":\" -f 3 | cut -d \" \" -f 1",$addr);
        $addr=array_flip(array_flip($addr));
    }
    if(is_array($addr)){
        $last_ret = false;
        for ($i = 0; $i < count($addr); $i++)
            if ($addr[$i] !== false) {
                $last_ret = wake_on_lan($mac, $addr[$i], $port);
            }
        return $last_ret;
    }
    if (is_array($mac)){
        $ret = array();
        foreach($mac as $k => $v)
            $ret[$k] = wake_on_lan($v, $addr, $port);
        return $ret;
    }
    //Check if it's a real MAC address and split it into an array
    $mac = strtoupper($mac);
    if (!preg_match("/([A-F0-9]{1,2}[-:]){5}[A-F0-9]{1,2}/", $mac, $maccheck))
        return false;
    $addr_byte = preg_split("/[-:]/", $maccheck[0]);
 
    //Creating hardware address
    $hw_addr = '';
    for ($a = 0; $a < 6; $a++)//Changing MAC address from hexadecimal to decimal
        $hw_addr .= chr(hexdec($addr_byte[$a]));
  
    //Create package data
    $msg = str_repeat(chr(255),6);
    for ($a = 1; $a <= 16; $a++)
        $msg .= $hw_addr;
    //Sending data
    if (function_exists('socket_create')){
        //socket_create exists
        $sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);    //Can create the socket
        if ($sock){
            $sock_data = socket_set_option($sock, SOL_SOCKET, SO_BROADCAST, 1); //Set
            if ($sock_data){
                $sock_data = socket_sendto($sock, $msg, strlen($msg), 0, $addr,$port); //Send data
                if ($sock_data){
                    socket_close($sock); //Close socket
                    unset($sock);
                    return true;
                }
            }
        }
        @socket_close($sock);
        unset($sock);
    }
    $ret = false;  // in case fsockopen() fails below
    $sock=fsockopen("udp://" . $addr, $port);
    if($sock){
        $ret=fwrite($sock,$msg);
        fclose($sock);
    }
    if($ret)
        return true;
    return false;  
}

if (@wake_on_lan('00:00:00:00:00:00')) {
    echo 'Done...';
} else {
    echo 'Error while sending';
}
?>


November 24, 2009

Confessions of a Researcher-turned-Engineer

Filed under: Articles, Development, Random Thoughts — Kris @ 1:59 pm

I wasn’t always a software engineer…

Before I began developing software for a living, I used to be in chemical research & development as a biochemist.  For some reason, I always found myself gravitating back towards software and informatics, so I eventually gave in and started a software company.  But I’ve learned a lot of lessons during my time as a scientist.

When you’re solving a difficult problem and you know 25% of the solution, you can figure out 70% through hard work, patience and trial-and-error.  The last 5% may never come, and if it does, it’s rarely when you’re looking for it.

“I am enough of an artist to draw freely upon my imagination. Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.” – Albert Einstein

Just because you don’t know the answer right now doesn’t mean that you can’t make headway and figure it out as you go.  I’ve learned the most when I’ve tried things that haven’t worked, but figured out what went wrong and then tried again.

If you’re not continually learning and improving yourself, your working days are numbered.

“Genius without education is like silver in the mine.” – Benjamin Franklin

Don’t be afraid of challenging yourself and learning new languages, technologies or skills.  Each one of them expands your experience and perspective, and can give you an opportunity to take a look at the status quo.  Unfortunately, self-motivated learners are in short supply but in constant demand because they can adapt to any situation.

If you want to succeed, you have to train for it.

“You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.” – Richard Feynman

Let’s face it…  The likelihood that you’re going to “knock one out of the park” your first time up to bat is pretty low.  Whether it’s business or baseball, the odds are against you.  It takes grit, determination, exercising yourself both physically and mentally, and lots of disappointment as you fail again and again.  But each day, train yourself a little more, a little harder, and each day you get a little stronger.  It’s cumulative, and it takes a lot of time.

Give credit to those who came before you.

“If I have seen further it is by standing on the shoulders of Giants.” – Sir Isaac Newton

In science, recognition is incredibly important and one of the first lessons that you learn.  When you’re making a presentation, writing an article, or doing your homework, you have to cite any sources of information that went into your work.  This is partly because scientists are focused on sharing knowledge, and in order for the community to work together, there has to be trust between researchers that their information will not be stolen by one another.

Even more so, no inventions are made in complete isolation.  They are incremental improvements based on our understanding of our world and everything that has come before.  The iPhone wouldn’t exist without the much earlier inventions of silicon wafers, transistors, plastics, aluminum, and light.  And the next wave of inventions will be no different.

Take advantage of the information and knowledge that we have, but show respect to the community that this knowledge came from.  You probably will need more help from it in the future.

