My phone is a fishing lure

Through a series of natural events, my phone ended up in the kayak, the kayak filled with water, and now the phone is dead. At this point it is probably better suited for catching fish, or for throwing at large game and rendering them unconscious.


This, of course, means that I have to buy another phone. Note that I did not say a new phone. It seems to me that the cell phone market sucks for anyone who, like me, is hard on a cell phone. There is no way I can afford to pay full retail because I am only months into my current contract.

This model is why I see so many screens with spider-cracked glass. Heck, someone at work actually had a piece of clear box tape over his screen just to keep his phone usable to the end of the contract.

I cannot believe that the hardware for a phone actually costs that much. My hunch is that cell phone manufacturers are trying to recoup R&D costs for the phone and all the customizations that they put on top of the base OS. Apparently the competition is so tough that they feel they need to customize in order to distinguish themselves.

Wish someone would figure out a way to make a fast (obviously not going to be cutting edge) phone running the base (aka free) version of Android. Rather than going after performance or a feature-rich environment, they would go after cost. I have seen some base Android phones and the OS is very usable. Negotiate with carriers not to preload those annoying apps that kill battery life. Figure out a different model, maybe even something like the ad-subsidized Kindle.

Thinking about prevention… in this case it would have just taken a ziplock bag. Wish the kayak dude had a box of them for his clients. Otherwise, I guess the only case that might work for me is one of those industrial, double-the-size-of-your-phone cases. I had an OtterBox in the past and it was just too big to put in my pocket. Reminds me of the cucumber scene in Spinal Tap.

In the meantime, I am trying the following…

  • Scouring the used phone sites for a replacement.
  • Using articles like this to try and save the old phone.

Agile Musing

I was at a LOMA meeting for work last week and was talking to a couple of other attendees about their Agile practices.


It reminded me of some early thinking I was doing back in the '90s.  I was always drawn to doing things in what people now call an Agile way, but the first time I heard someone actually put words to what I was thinking was a presentation Jim McCarthy did at the 1995 Microsoft Global Summit in San Diego (I think).  I used to have the video on tape but it seems to be long lost at this point.  I found a couple of YouTube excerpts but not the whole thing.  He went on to write his book Dynamics of Software Development, which elaborated on his 21 rules (I think the book has 40 something).  I remember liking his style… oh yeah… and the content was good too. :-)

The Pragmatic Programmer is another book that put more meat on the bones of things that I was thinking or struggling with.  I consider this a timeless book, unlike Peter Norton's Programming Guide to the IBM PC (the pink shirt book) I recently came across in my archive.

As I reminisce I am reminded about the Agile Manifesto which at first made me laugh but after that tickling feeling passed I took the simplicity and truth of it to heart.  I attached the image I keep on my desk here for posterity.

[Image: the Agile Manifesto]

Not sure what the next Agile is going to be.  I do feel like there is something still missing that  I can’t quite put my finger on.  Hmmm.

Things that were once hard

I needed a quick utility to generate a script for setting the permissions on a massive (wide and deep) directory structure.  The analysis for this was not going well – I asked someone else to try it and they did not get the scope of the issue.  I needed to just get something running quickly, so I wrote a quick .NET app to generate what I needed.  I was pleasantly surprised when I went to actually grab the permissions for a given folder.

The last time I did something where I was scraping permissions off a folder, I was in C++ and the Win32 API.  Yikes!  High impedance for something that I was just going to throw away.

The following is a snippet of the code that I wrote in .NET 4 (I don’t think this code would be any different in .NET 2).

 // Requires System.Collections.Generic and System.Security.AccessControl
 class FolderPermissions
 {
   public string Name { get; set; }
   public IEnumerable<Acl> Acls { get; set; }
 }

 class Acl
 {
   public string Name { get; set; }
   public FileSystemRights Permission { get; set; }
 }

 private static FolderPermissions GetFolderPermissions(string pFolderName)
 {
   // SafeCallToGetAccessRules wraps the actual ACL read (not shown)
   AuthorizationRuleCollection perms = SafeCallToGetAccessRules(pFolderName);
   var retAcls = new FolderPermissions { Name = pFolderName };
   var acls = new List<Acl>();
   foreach (FileSystemAccessRule perm in perms)
   {
     // Only Allow rules matter for the script I am generating
     if (perm.AccessControlType == AccessControlType.Deny)
       continue;
     var acl = new Acl { Permission = perm.FileSystemRights, Name = perm.IdentityReference.ToString() };
     acls.Add(acl);
   }
   retAcls.Acls = acls.ToArray();
   return retAcls;
 }

This snippet copies the permissions for a given folder into my own lightweight structure, so that I could run queries over the structure to help me create the script.
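The SafeCallToGetAccessRules helper isn’t shown above; it is basically a try/catch wrapper around the framework call.  Something like the following sketch – the fallback-to-empty behavior is my own choice, and it assumes the .NET 4-era Directory.GetAccessControl API:

```csharp
using System;
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

static AuthorizationRuleCollection SafeCallToGetAccessRules(string pFolderName)
{
    try
    {
        // Read the folder's DACL and return both explicit and inherited rules
        DirectorySecurity security = Directory.GetAccessControl(pFolderName);
        return security.GetAccessRules(true, true, typeof(NTAccount));
    }
    catch (Exception)
    {
        // Folders we cannot read (or that vanish mid-scan) just come back empty,
        // so the caller's loop keeps going instead of blowing up the whole run
        return new AuthorizationRuleCollection();
    }
}
```

The “safe” part matters on a wide and deep tree: one access-denied folder shouldn’t kill the whole scan.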

One thing in particular I remember about doing this in Win32: once I got the SID for a particular identity, it was a pain to resolve it to a name.  Now it is just IdentityReference.Value.

This is the type of value I like.  Now if I just had a scripting language to do this in, so I did not have to compile, I would be all set.  Of course there are a bunch out there – I am just not as proficient in them as I am in C#.  Hmmm.

IIS 7.5 and 2 Level Auth

We use a large vendor application at work.  We host all the infrastructure for the application inside the firewall, so there is absolutely no access from the Internet.

In IIS6 we configured two-level authentication – NTLM and Forms Auth.  The vendor requires Forms Auth for the application.  Given the importance of this application and the sensitive nature of the data, I also enabled NTLM and secured the site to only people in our division (about 450 people).  There are about 150 logins in the application, meaning that 300 people have access to the site even though they will not be able to actually see any screens until they log in.

Through a series of discussions with different audiences, it was decided that there is still enough risk of those 300 people being infected with something that takes advantage of cross-site scripting or other classic vulnerabilities.  So I further locked down the site using a more restrictive group.  While I feel like we are being a little paranoid about this, I capitulated.

Enter IIS7…


Our standard for servers is Windows 2008 R2, so we are on IIS 7.5.  Doing this same two-level authentication on IIS 7.5 did not work.  Why?  Because of the integrated pipeline… it simply cannot do both at the “same time”.  One has to come first.  In IIS 6, NTLM always came first since that was done by IIS, and then Forms Auth, since that was done by ASP.NET.

There are a couple of hacks out there that describe how to work around this, one of which I found posted here by Mike Volodarsky (formerly of the IIS team).  He talks about a way to make this work by splitting up the authentication and forcing one to happen before the other.  I was up until well after midnight last night trying to figure out how I would make this work, given that the application is a vendor application and I don’t have the source code.  Not to mention that everything is precompiled, signed and obfuscated.  All of which adds up to… this would be really hard to hack.

Finally, after a bit of chin rubbing… I came to the conclusion that the integrated pipeline may not be the problem at all.  Why do I even still need NTLM?  I mean, if the only way for someone to access a web page on the site is to have a valid Forms Auth token, do I really need to force them to also have an NTLM token?  I went to bed content that I just need to leave NTLM behind in this case.

Now I just need to convince everyone who was pushing the original requirement for two-level authentication that I don’t need it anymore.  Since they don’t really understand the technology very well, that could be a challenge.  Given that the way we got here was through a vulnerability scan of the web site in the first place, perhaps requesting another one will demonstrate my point and I won’t have to make them understand the why.

I will post an update on the outcome.

TFS Recovering Shelveset for Invalid User

One of the developers on the team was getting a TFS error (below) yesterday while trying to access a shelveset from a developer who left the company a couple of months ago.  Turns out he needed some of the code on the shelveset.

 TF50605: There was an error looking up the SID for TC30014  

Note to self… shelvesets are probably not the way to do this; a branch would be a better construct.  I want to encourage the team to be doing more of those anyway.

The problem is that TFS is going to look up this user in Active Directory and the user does not exist anymore.  I can see the shelvesets in TFS using either TFS itself or TFS Sidekicks.  I included a screen print of all the active shelvesets for this user in Sidekicks.

[Screenshot: active shelvesets for the user in TFS Sidekicks]

So TFS must be doing some lookup on the user and, when it does not find it, errors out.  Not knowing how to solve this, I put a couple of searches out there (“tf50605 -vssconverter”) and found this article.  While not directly what I needed, it was enough information to crank up SQL Server Management Studio and start poking around a bit.  I started with the OwnerId for the user that was removed and for the user who was trying to get the code.

 SELECT IdentityId FROM tbl_Identity WHERE (DisplayName LIKE 'ad2\TC30014')  
 SELECT IdentityId FROM tbl_Identity WHERE (DisplayName LIKE 'ad2\RMxxxxx')  

Once I had this I plugged the deleted user’s id into the following query to get all the workspaces.

 SELECT TOP 1000 [WorkspaceId]  
    ,[OwnerId]  
    ,[WorkspaceName]  
    ,[Type]  
    ,[Comment]  
    ,[CreationDate]  
    ,[Computer]  
    ,[PolicyOverrideComment]  
    ,[LastAccessDate]  
    ,[CheckInNoteId]  
    ,[DeletionId]  
  FROM [TfsVersionControl].[dbo].[tbl_Workspace]  
  WHERE OwnerId = 276  

This showed me a bunch of workspaces.  What I noticed is that shelvesets and workspaces are evidently stored in the same table, distinguished by the Type column.  So, after a little inferring and playing in TFS, it looks like if I hack this table I can reassign all the shelvesets to a valid user (which is sort of the spirit of the article above).

Leaving out some of the details, I ended up with the following query that reassigns the orphaned shelvesets (type=1) from one owner to the other.  Since the WorkspaceName is part of the primary key (and relatively short), I changed the name so that the new owner could distinguish between his shelvesets and those that were reassigned.

  UPDATE [TfsVersionControl].[dbo].[tbl_Workspace]  
  SET OwnerId = 123,  
    WorkspaceName = RIGHT(WorkspaceName + '-Reassigned',64)  
  WHERE OwnerID = 276   
    AND Type = 1  
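In C# terms, the RIGHT(WorkspaceName + '-Reassigned', 64) trick works like the sketch below; the 64 matches what I used in the query, so when the combined name is too long the truncation happens from the left and the -Reassigned marker always survives:

```csharp
using System;

// Mimics T-SQL RIGHT(WorkspaceName + '-Reassigned', 64):
// append the marker, then keep at most the rightmost 64 characters
static string ReassignedName(string workspaceName)
{
    var renamed = workspaceName + "-Reassigned";
    return renamed.Length <= 64 ? renamed : renamed.Substring(renamed.Length - 64);
}

Console.WriteLine(ReassignedName("DevBox1"));                  // DevBox1-Reassigned
Console.WriteLine(ReassignedName(new string('x', 70)).Length); // 64
```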

Looking back at TFS Sidekicks (I verified it first in SSMS – wink) I could see that the more recent shelvesets had indeed been reassigned.  Success!!

[Screenshot: reassigned shelvesets in TFS Sidekicks]

Now granted, we are on a relatively old version of TFS; so this hack may already be obsolete. But I wanted to put it out here just in case.

WordPress and Word

Microsoft Word has a feature to use Word to compose and publish a blog entry. I have used this periodically and have had mixed feelings about it. Now that I am hosting my own blog using WordPress I wanted to test this feature out again. How does it work with formatting different things and how well does the overall look and feel match the rest of the blog?

Here is some code…

static bool RenameFile(FileInfo fi, string newFullFilename)
{
    try
    {
        fi.MoveTo(newFullFilename);
        Console.WriteLine("New={0}", newFullFilename);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Error {0} renaming {1}", ex.Message, newFullFilename);
        return false;
    }
    return true;
}


Here is a picture…

I notice that it does not do multi-column or other more advanced formatting normally available in Word. Maybe I will give this a shot since it does give you the robust spelling/grammar checking of Word.

PS.  I had to go into this post from the WordPress editor and clean up the code section.  The issue is the different ways of single spacing something: <p> vs <br>.  Every line of code comes out as a <p> when in fact I want it to end with <br>.  Oh well.  Not as good as I hoped.

I took the code above and plugged it into the code formatter I previously blogged about here.  It looks like the following, which in preview mode looks pretty good.

 static bool RenameFile(FileInfo fi, string newFullFilename)
 {
   try
   {
     fi.MoveTo(newFullFilename);
     Console.WriteLine("New={0}", newFullFilename);
   }
   catch (Exception ex)
   {
     Console.WriteLine("Error {0} renaming {1}", ex.Message, newFullFilename);
     return false;
   }
   return true;
 }


First WordPress Entry

I have been doing a little more blogging lately and have been growing more frustrated with Blogger each time.  Not that it’s that bad, but it’s not that great either.  I have had my own domain sitting dormant for some time now.  I used to use this as a place out on the Internet where I could test my code “in the real world”.

My wife told me about WordPress a while back and I asked her again about it today.  So I spent the day getting it loaded up, configured, copying over the content and making some customizations.

Overall I like the product.  Especially given the price.

Debugging 101

We had a very nasty issue yesterday (into today) at work.  It involved a problem we saw once before, last year, and never figured out.  Well, it was back yesterday, and it reminded me of some of the important aspects of problem resolution, closely related to debugging skills.

The specifics of this issue are not really important except that the infrastructure of the system has multiple physical tiers and that it is a vendor application hosted internally.  Multi-tier applications have a higher level of complexity that comes from the fact that there is way more code running than just the application itself (VIPs, routers, firewalls, communication stacks, different platforms, etc.).  Vendor applications can just be a pain since you don’t know the internals of what is going on, which means you are sometimes making assumptions (aka educated guesses).

Below are a couple of my favorite principles for debugging an issue.

Write Everything Down

  • This is not a time to test your powerful memory skills.  You will get tired and you will forget.  As you try things you will find things that work and things that don’t.  If you are lucky you will find the fix quickly.  If you are not (it took us 20+ hours this time), then you will have many things that did not work.  These are very important, and you are going to have lots of them.
  • You are going to have people rotating in and out of the virtual team that are going to be a distraction if you have to bring each of them up to speed on what you have already tried.
  • You will resolve this issue at some point and want to restore some of the things you changed.  Write down the current state of the system – “which knobs have been turned”.
  • You may look back on the path you followed to resolution and be able to identify ways to improve the system overall.  If you write this stuff down you will appreciate it a couple of days later when your life returns to normal.
  • Names and phone numbers.  If you have a diversified organization, as we do, you are going to need/get lots of people coming and going.  Many of these folks will have knowledge of, or authorization to change, things that you do not.  Once they are members of the virtual team you want to keep them, since they already have context, which in and of itself is valuable.

Change One Thing at a Time
Thankfully this is something that I learned very early on and have tried to live by.  I use the word tried because I have succumbed to the temptation to do otherwise and often lived to regret it.  My implementation of this principle is the following…

  1. Draw a conclusion about what you think is wrong.  In other words, don’t go shooting in the dark.  If you don’t know what to do next, then stop.  It doesn’t mean that you won’t be doing something soon, but don’t go trying things without first knowing what you think the issue may be.  At some point your conclusion will either be correct and you have “solved” the issue, or it won’t be and you have eliminated another thing that is NOT the issue.  More on what does not work later.
  2. Evaluate your options for correcting it.  Write them down; you may want to try all of them.  This is a good time for brainstorming.  You may want to bring some others into the virtual team for a short time to help out here.  Treat them as consultants (see roles and distractions below) and don’t let them linger too long unless they are able to fit in.
  3. Decide your approach for correction.  One person owns the decision as to what the next course of action is (see roles).  There a million ways of coming to a decision that I will not go into – the key here is that you choose one and let everyone know what the decision is.
  4. Plan your implementation.  This is not as heavy as it may sound.  You don’t want to spend too much time here; not that it is a waste, but at some point in this discussion you will get to a point of diminishing returns.
    1. Identify what you believe the new outcome will be.  
    2. How will you know if it worked?  
    3. Do you know how to roll back the changes you made?  
    4. How are you going to test your change? 
    5. What can go wrong?
    6. What may other outcomes be and what do they tell you?
    7. All things to consider BEFORE you actually implement the change.  I feel another whole blog topic coming just on this point.  If you don’t understand why all these are important things to consider, then I need way more space than I want to spend here to show you why… so I won’t.  Trust me.
  5. Implement the change.  
    1. Identify who is going to do what and make sure they are clear on what they are changing.  Hopefully they are an expert and not learning as they go. 
    2. Pair programming was never more helpful than now.  Work together to ensure accuracy.  You will be getting tired and mistakes will happen.  Put everyone to use here and let them help by watching for gross errors.  Don’t be afraid to show someone how this works; you don’t want the fireworks effect where every time someone types something the entire room gasps.  This takes patience; it is hard to watch someone else type and it is equally hard to be watched.
  6. Run your test.  This is what we have been working for.  Write down the result(s).  Did something totally unexpected happen?  What does this tell you?  What conclusions can you draw?
  7. Repeat.  Look at this iteration of conclusions, options, predictions and outcomes and decide what you are going to do next.  Do you start over at step 1?  Or someplace in between here and there?
  8. Don’t forget to reset.  You need to choose when to restore/reset the environment.  Record what you do and make sure everyone knows the current state.

Clear Roles and Responsibilities
Important to consider, even if it feels kind of bland in some ways.  Not to mention that this is probably a whole entry in itself.  A couple of things I wanted to record now are…

  1. Who is running the show? Make sure they can delegate.  Make sure everyone respects the decision when it is made.
  2. Who is communicating?  Boy this is a big topic in itself…
    1. What are the different audiences?
    2. Who makes the decision to communicate?
    3. Is this the same as / related to escalation?
    4. Frequency?  Email?  Phone?  etc.
    5. Blah, blah, blah…
  3. Who is a spectator?  Make sure they know they are.

A good example of poor role definition happened to us during this most recent incident.  The system came back up and an excited member of the team sent an email to the entire customer base that the system was available.  Whoa!  Yes, the system did come back up, but it was not ready for the business to start using it yet.  We still had not assessed why the system came back up and whether we thought our success was going to last.  What a pain if the system failed a couple of minutes later.  Also, the system was still in a debug mode.  We had lots of logs turned on and test settings configured that needed to be changed in order to get the system back to its production state.  Luckily the users figured out that something was not quite right and let us know.  We recovered before anything really bad happened, but it could have gone horribly wrong.



Did I follow my own principles this time?  I tried.  But sometimes when you have lots of people involved it is just not possible.  People get anxious and/or want to contribute.  They have good intentions but in the end it muddies the whole thing.

In our case we had a couple of people off in the corner of the room trying things.  One with elevated privileges and the other with a little bit of knowledge but not a core member of the team.  They started hacking around without anyone else knowing.  They changed a bunch of things on a test server and then found that the production environment was back up.  The likelihood that they actually did anything is very low, but now we don’t know, since we don’t know what they did or the state of the environment before they did it.  Chaos.  Now we are left with a nagging question.  This has left me with a couple of new principles.  They are not very well thought out at this point, but I wanted to get them down now before I forgot.

Eliminate Distractions
Distractions come in many forms and they can slow you down or just plain hurt.

  1. Don’t have anyone involved who does not need to be.  Excitement tends to draw crowds, so you need to know when to put up the yellow tape.  I don’t want to be militant about this, because there are some people out there who are comfortable being in a peripheral role (see above) and know when to contribute and when to stay out of the way.
  2. Get to a war room or isolated area that makes all the other principles easier.  
    1. We have several big rooms with 80″ smart board/displays and lots of whiteboard space; which can aid in the documentation.  
    2. They also have table-mounted speakers and lots of ceiling speakers for good audio, because you will likely have a distributed team and communication with all of them is going to be hard enough – forget it if you cannot hear one another.
    3. Getting away from the crowds can keep the crowds away.
  3. Don’t forget the creature comforts: food, drink, restrooms.  These are obvious things that the team will need during an incident, but they can also be distractions.  If the restrooms are way far away, then it just hurts.  If people are hungry they can be distracted.  You also don’t want everyone fending for themselves if you don’t have to.  I kept bringing in food for the team.
  4. Get sleep when you need it.  No heroes.  If you are getting punchy then you are probably going to become a distraction for the entire team.  There are all kinds of studies out there that relate being tired to being drunk – don’t debug drunk.  You will swerve over the yellow line.

Understand Vendors in Scope
Make sure you understand the vendor products or services in your application before you have an issue.  What support arrangement do you have with them?  Is it 24×7?  How do you reach them?  Make sure the contact information is current.  What is their engagement/escalation model?  Do they know your environment?  If not, how are you going to educate them?  Are you sure that the sharing technology (WebEx, etc.) they use is compatible inside your firewall?  Are you current / do they support the version you are on?

More to say here, but I am running out of steam.  I may revisit this at a later time.



Deep breath.  I am down here at the bottom of this long entry and liking the brain dump.  Not sure how coherent it all is but it feels pretty good.  Let me know if you find any of this helpful.

As I was wrapping this up I found this interesting article that I thought was worth linking to here.  I am constantly amazed at how many topics there are “out there”.

Continuous Integration, meet Mr. Sarbanes and Mr. Oxley

I have been thinking a lot about Continuous Integration, DevOps and related topics where we are building more and more tools to help take the variability out of building systems.  By building systems I am talking about the phase of the SDLC between the time we begin thinking about writing the actual code and the time we deploy that code to production.  When I think of the lifecycle of an application, this loop stands out to me as one that gets executed many, many (!!) times.  So it makes sense to 1. make this as efficient as possible and 2. increase the accuracy as much as possible.

We are doing this in a couple of different ways, which I am not sure I can go into much detail about because of company policies.  But suffice it to say that we use a third-party build/test tool that checks everything out of source control, modifies configuration, builds, runs unit tests and deploys the code (and documentation).

This year we have a new requirement that has me rubbing my chin quite a bit.  The requirement from the auditors is that (according to SOX) those people in the development role cannot have write access to production bits, and those in the deployment role cannot have access to the development bits.  When I say bits I am talking at the runtime/deployable level (not the source code).  The rationale behind this (so I am told) is that it prevents anyone from introducing changes into the process as code is promoted from lesser test environments to production.  Reserving any commentary about how I feel about this policy, it is something that we are being required to do.  And given the sensitivity regulators have around the investment management industry, and the enterprise approach the parent company takes, it doesn’t really matter how I feel.

My anti-strategy is to not use human separation to implement this.  Having a team just to press a button that I asked them to press seems like a waste.  Sure, nothing should go wrong if we are doing this right, but something WILL go wrong.  Can you just imagine the conversation between the deployer and the developer when this happens…

Deployer: “The package failed”
Developer: “What was the error?”
Deployer: “Some really big negative number”
Developer: “Can you send me the logs?”
Deployer: “Where are they?”
Developer: “Try this new package”
Deployer: “I don’t see approval from your manager for this”
Developer: “It is 2am my manager is sleeping”

The other issue I see with the human solution is production support.  In a time-critical break/fix scenario, the last thing you want is to have to get through a process staffed by people who do not understand your systems, your business, and therefore the context in which they are performing.  Sure, you have separated the roles, but at what cost?

My strategy is that by leveraging our continuous integration tool I should be able to accomplish much of this, since the deployer is a system/application itself, and as long as I can show accountability we should be all set.  I will admit I am getting some initial resistance to this, but I am hoping that through a partnership with the auditors we can figure out a reasonable way to do it.

One of the interesting topics around this new requirement is the DBAs, since they inherently break the developer/deployer separation by having access to everything in every environment.  The database seems to me like the perfect place to be doing something that is “not on the up and up”.  Interestingly, the DBAs are considered out of scope for this requirement.  Biting my tongue.  Is this implying that developers are inherently less trustworthy than DBAs?  Or that no DBA is savvy enough to change a sproc (aka code) to do something devious?  Maybe the DBA lobby is just that much better than the developer lobby.


Recursive Yield Return

I was writing a recursive routine the other day and wondering what the implementation would look like if I converted it to use yield return.

Much to my consternation, this was not as easy as I thought it would be.  It took me almost a week to get it working.  Not a week of constant effort, of course, but elapsed time.  In my initial implementation I could not get my head wrapped around whether each yield return was going to bypass all the calls on the stack and return a result to the original caller, OR whether it was going to just pop one call context.  Turns out it is the latter, which greatly complicates the implementation.  The implementation of this particular routine kept escaping me.
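A toy example (not the real routine) of the behavior that tripped me up; the inner yield return only pops one call context, so every level of the recursion has to explicitly re-yield what the level below produced:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static IEnumerable<int> CountDown(int n)
{
    yield return n;
    if (n > 0)
    {
        // Without this loop the recursive call's items never reach the original
        // caller; yield return pops one call context, not the whole stack.
        foreach (var i in CountDown(n - 1))
            yield return i;
    }
}

Console.WriteLine(string.Join(",", CountDown(3))); // 3,2,1,0
```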

I was downstairs meditating last week, not thinking about anything in particular and it hit me.  Like a flash…I could see the implementation.  I ran upstairs and quickly wrote down the rough implementation.  I felt kind of like a musician when a riff for a song hits them in their sleep and they need to quickly write it down before they forget it.

I came back to the code after dinner and put the finishing touches on it.

The first method is the seed method, implemented as an extension method on IEnumerable.  It in turn calls the recursive method.

The implementation below is a function that, given a collection of collections, returns all the permutations (a collection of collections).

For instance, if you pass…
[
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]

This routine will return 3x3x3 (27) collections, each of which will contain 3 items.  Using the data above, here are the first few collections returned…
[
    [1, 4, 7],
    [1, 4, 8],
    [1, 4, 9],
    [1, 5, 7],
    …
]

public static IEnumerable<IEnumerable<T>> GetPerm<T>(this IEnumerable<IEnumerable<T>> domain)
{
    return GetPermRecur(domain);
}

public static IEnumerable<IEnumerable<T>> GetPermRecur<T>(IEnumerable<IEnumerable<T>> domain)
{
    var c = domain.Count();
    var firstFromDomain = domain.First();
    if (c == 1)
    {
        // Base case: one collection left, each of its items is a result of length 1
        foreach (var item in firstFromDomain)
        {
            yield return new[] { item };
        }
    }
    else
    {
        // Recursive case: prefix each item onto every permutation of the rest
        var domainWithoutFirst = domain.Skip(1);
        var permSoFar = GetPermRecur(domainWithoutFirst);
        foreach (var item in firstFromDomain)
        {
            foreach (var curCol in permSoFar)
            {
                var curPerm = new List<T> { item };
                curPerm.AddRange(curCol);
                yield return curPerm;
            }
        }
    }
}
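And here is the routine being exercised with the sample data from above (I have repeated the implementation so the snippet stands on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var domain = new[]
{
    new[] { 1, 2, 3 },
    new[] { 4, 5, 6 },
    new[] { 7, 8, 9 }
};

var perms = domain.GetPerm().Select(p => p.ToArray()).ToList();
Console.WriteLine(perms.Count);                 // 27
Console.WriteLine(string.Join(",", perms[0]));  // 1,4,7
Console.WriteLine(string.Join(",", perms[1]));  // 1,4,8

public static class PermExtensions
{
    public static IEnumerable<IEnumerable<T>> GetPerm<T>(this IEnumerable<IEnumerable<T>> domain)
    {
        return GetPermRecur(domain);
    }

    static IEnumerable<IEnumerable<T>> GetPermRecur<T>(IEnumerable<IEnumerable<T>> domain)
    {
        if (domain.Count() == 1)
        {
            // Base case: each item in the last collection is a length-1 result
            foreach (var item in domain.First())
                yield return new[] { item };
        }
        else
        {
            // Prefix each item of the first collection onto every combination of the rest
            foreach (var item in domain.First())
            {
                foreach (var tail in GetPermRecur(domain.Skip(1)))
                {
                    var perm = new List<T> { item };
                    perm.AddRange(tail);
                    yield return perm;
                }
            }
        }
    }
}
```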