What do you call the people you write software for?

For most of my career I have referred to the people I write software for as “users.”  It just made sense: they are using the software, so they are users, right?

Interestingly enough, at Amazon we call them customers.  That is not to say the word user has been stricken entirely from the vocabulary, but it makes me wonder if it should be.  Here is my thinking: something different happens in my mind when I think of the people I am writing software for as customers.  It is hard to describe…but maybe this story will help you understand what I mean…

When I worked at Microsoft, I worked in the consulting division, and we would help people make sense of the variety of ways to write code on the Windows platform.  Often some of my customers (aka clients) would have come up with a unique way to solve a problem using some piece of Microsoft software.  When I would relate this back to the product team (another aspect of my job), more often than not the question I would get back was “why would they do that – that is not what we intended?”  We used to refer to this as the RDZ – the reality distortion zone: the invisible field that hung over Redmond and prevented the product teams from understanding how people really used Microsoft products.

When I think about how a customer-centric Microsoft would have been different, I imagine the response back to me would have been something like “that is really cool, we never thought of that – how can we make it better?”

Try it.  It may just change the way you think about things.

Firehose Treatment – Open Wide

I needed to take a bit of time off from blogging while I worked out the details of interviewing, negotiating, and relocating (me, at least) to Seattle from Connecticut.  I am now into my second week of work at the largest online retailer and the fire hose is blasting full force.

Being this big means that someone has already done a lot of thinking about how to make something massively scalable.  Back in “the day” I remember poring over Principles of Transaction Processing by Bernstein.  I knew this stuff inside and out, and it still serves me.


Given the massive need for scalability, I have had to dust off some new/old theories.  ACID is out, BASE is in.  Sure, I have read about a bunch of these over the last few years, but it is different being at a place that is actually doing it.

Here is a list of things blowing my mind today…

  1. Eventual Consistency – It will get there when it gets there.
  2. Anti-Entropy – Anything with the word entropy in it I find confusing.  So what is Anti-Entropy? :-0 (see the sketch after this list)
  3. AWS – I had to pay for this before; now everything we deploy is already running on it.  Note to self: shut down my own services and save a couple of bucks.
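
Since I asked the question, here is the mental model I have been forming.  This is just a toy sketch of my own (nothing to do with how Amazon actually implements any of this): two replicas accept writes independently, and an anti-entropy pass periodically compares them and copies the newer version of each key across – so reads converge, eventually.

```csharp
// Toy sketch of anti-entropy / eventual consistency (my own example,
// not any real system): replicas take writes independently and then
// periodically repair their differences so they converge over time.
using System;
using System.Collections.Generic;

class Versioned
{
    public string Value;
    public long Version;   // e.g., a timestamp; last write wins
}

class Replica
{
    public Dictionary<string, Versioned> Store = new Dictionary<string, Versioned>();

    // Accept a write only if it is newer than what we already have.
    public void Put(string key, string value, long version)
    {
        Versioned existing;
        if (!Store.TryGetValue(key, out existing) || version > existing.Version)
            Store[key] = new Versioned { Value = value, Version = version };
    }

    // Anti-entropy: pull anything the peer has that is newer than ours.
    public void SyncFrom(Replica peer)
    {
        foreach (KeyValuePair<string, Versioned> entry in peer.Store)
            Put(entry.Key, entry.Value.Value, entry.Value.Version);
    }
}

class Program
{
    static void Main()
    {
        Replica a = new Replica();
        Replica b = new Replica();

        a.Put("cart:42", "1 book", 1);
        b.Put("cart:42", "2 books", 2);   // a later write lands on the other replica

        // Until an anti-entropy pass runs, reads from a and b disagree...
        a.SyncFrom(b);
        b.SyncFrom(a);

        Console.WriteLine(a.Store["cart:42"].Value);   // "2 books" on both now
    }
}
```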

Keep It Simple (KISS) Revisited

I have a calendar from a vendor we use that has some of the classic coding and design principles – one for each month.  I was rubbing my chin staring at it this morning and I wanted to share what popped into my head…

While I am sure that the KISS principle has been written about (perhaps to death) I had another instance of this today as it applies to operations and infrastructure.

Quick background – I recently inherited an Operations group.  Operations is the clean-up crew of development here.  While I understand the rationale for separating them, I like the idea of developers supporting their own code so that they better understand the impact of what they do.  What a great teaching tool – if you don’t want to get up in the middle of the night, fix the code, do a better job in the first place, write a utility to help you out.

We have a bunch of applications that have been around for years, and over time the developers who maintained many of them have moved on.  So today I asked someone about two AD groups and what they are used for.  In both cases the answer was initially “I don’t know” – and later became “those are not used anymore.”

Part of keeping systems simple is getting rid of the things that are not used anymore.  All these extraneous moving parts we don’t need just create system bloat – and they should be easy to remove.

Granted, you cannot get to everything right now.  But this stuff has to get cleaned up over time.  Putting it into some sort of maintenance list, wish list, or Kaizen log seems like an easy thing to do.

All it takes is discipline.

IIS 7.5 and 2 Level Auth

We use a large vendor application at work.  We host all the infrastructure for the application inside the firewall, so there is absolutely no access from the Internet.

In IIS6 we configured 2-level authentication – NTLM and Forms Auth.  The vendor requires Forms Auth for the application.  Given the importance of this application and the sensitive nature of the data, I also enabled NTLM and secured the site to only the people in our division (about 450 people).  There are about 150 logins in the application, meaning roughly 300 people can reach the site even though they cannot actually see any screens without an application login.

Through a series of discussions with different audiences, it was decided that there is still enough of a risk of those 300 people being infected with something that takes advantage of cross-site scripting or other classic vulnerabilities.  So I further locked down the site using a more restrictive group.  While I feel like we are being a little paranoid about it, I capitulated.

Enter IIS7…


Our standard for servers is Windows 2008 R2, so we are on IIS7.5.  Doing this same 2-level authentication on IIS7.5 did not work.  Why?  Because of the integrated pipeline…it simply cannot do both at the “same time”.  One has to come first.  In IIS6, NTLM always came first, since that was done by IIS, and then Forms Auth, since that was done by ASP.NET.

There are a couple of hacks out there that describe how to work around this.  One of them, which I found posted by Mike Volodarsky (formerly of the IIS team), talks about a way to make this work by splitting up the authentication and forcing one to happen before the other.  I was up until well after midnight last night trying to figure out how I would make this work, given that the application is a vendor application and I don’t have the source code.  Not to mention that everything is precompiled, signed, and obfuscated.  All of which adds up to…this would be really hard to hack.
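
For flavor, here is the general shape of those workarounds as I understand them (a rough sketch of mine, not Mike’s actual code): put one hand-off page under Windows auth only, and once IIS has established the NTLM identity, mint the Forms Auth ticket yourself.  Of course, this assumes you control the login hand-off – which is exactly what a precompiled vendor app denies you.

```csharp
// WinLogin.aspx.cs -- sketch of a hand-off page that would live in a
// folder locked down to Windows (NTLM) authentication only. By the
// time this runs, IIS has already challenged for NTLM; we then issue
// the Forms Auth ticket ourselves and redirect into the application.
// (Illustration only; a real vendor app issues its own tickets after
// validating its own credentials, which is why this falls apart
// without source code.)
using System;
using System.Web.Security;

public partial class WinLogin : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string windowsUser = Request.LogonUserIdentity.Name;   // DOMAIN\user

        // Issue the Forms Auth cookie and bounce to the originally
        // requested URL (or the default page).
        FormsAuthentication.RedirectFromLoginPage(windowsUser, false);
    }
}
```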

Finally, after a bit of chin rubbing, I came to the conclusion that the integrated pipeline may not be the problem at all.  Why do I even still need NTLM?  If the only way for someone to access a web page on the site is to have a valid Forms Auth token, then do I really need to force them to also have an NTLM token?  I went to bed content that I just need to leave NTLM behind in this case.

Now I just need to convince everyone who was pushing the original requirement for 2-level authentication that I don’t need it anymore.  Since they don’t really understand the technology very well, that could be a challenge.  The way we got here was through a vulnerability scan of the web site in the first place, so perhaps requesting another one will demonstrate my point and I won’t have to make them understand the why.

I will post an update on the outcome.

New OLEDB Provider for Reading Excel 2007

I work for a financial company that uses a lot of Excel. Many of the business users here practically live in it. So we are constantly trying to figure out how to leverage Excel in our applications.

Do we just export data to Excel? If so, is it a snapshot/copy of the data, or do we build a connection to the backend data? What about importing data? Where is the boundary between using VBA and VSTO? And if we pile SharePoint and Excel Services on this heap, it starts to get really interesting.

One of our technical frustrations has been that the OLEDB driver for reading Excel on the server was fairly lame. It made a lot of assumptions about the data that made it nearly unusable except in the simplest cases. Last week I found this updated provider for Excel 2007 and I am looking forward to giving it a deeper look. What I can say is that it did read in all my data rather easily. I just have not had time to play around the fringes much.
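
If you want to kick the tires yourself, reading a sheet through the new provider looks roughly like this. The file path and sheet name are placeholders; Microsoft.ACE.OLEDB.12.0 is the provider name the download below installs.

```csharp
// Read a worksheet through the 2007 Office System (ACE) OLEDB provider.
// C:\Temp\Book1.xlsx and Sheet1 are placeholders -- use your own file.
using System;
using System.Data.OleDb;

class ExcelReadDemo
{
    static void Main()
    {
        string connStr =
            "Provider=Microsoft.ACE.OLEDB.12.0;" +
            @"Data Source=C:\Temp\Book1.xlsx;" +
            "Extended Properties=\"Excel 12.0 Xml;HDR=YES\"";   // HDR=YES: first row is headers

        using (OleDbConnection conn = new OleDbConnection(connStr))
        using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", conn))
        {
            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);   // dump the first column of each row
            }
        }
    }
}
```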

Download details: 2007 Office System Driver: Data Connectivity Components
http://www.microsoft.com/downloads/details.aspx?FamilyID=7554F536-8C28-4598-9B72-EF94E038C891&displaylang=en

UI Testing

I spent a bunch of time in the early Windows days trying to do UI testing the way this blog entry (UI Test Automation Tools are Snake Oil) talks about. Like him (or his clients), we used some really expensive tools and ended up not doing a very good job. I really like the thinking Michael is doing here. This is definitely where my head is at. The problem is that I am struggling with creating MVC-style applications.

Where I work now we just don’t build big applications. Instead we have lots of small applications that we deliver in weeks, not months or years. I have not found this pattern of doing software very conducive to building applications with lots of design. Now hold on a minute – that does not mean we don’t do design. We just don’t do lots of design. When an application is very small, how much design do you really need? Most of the applications tend to look like each other – read some data…munge it together…display it. We don’t do much data entry, which is an exception to the pattern of apps I have built over the years.

That is not to say we don’t have some big-ish applications. We do. They are just the exception. Could they do with more engineering? Absolutely! But we just don’t have the infrastructure (staff, mindshare, experience, etc.) to do it that way. Of course there are people doing a high level of engineering here. It’s just that it’s not everyone – it’s not our default.

At first this was a hard pill to swallow (and it still makes me a little gassy at times). But it’s the nature/culture of the way we do things. It’s a model that works, but not in a scalable way. Sharing anything in this model is very hard – maybe I will blog later about how we do that.

Brick Wall – Bang Head

I had 17 (the number is not important, except that it is more than one) Excel files I needed to get into a database, and since we happen to use SQL Server, I thought of SSIS. I was going to leave my trusty C# hammer in the tool bag for a more specialized tool. I was confident that, even though I had not used SSIS for much (mostly trivial imports from SSMS), I could get something running pretty quickly.

Well I could not have been more wrong.

I spent the entire day working through a series of SSIS issues specific to the problem I was trying to solve. The final issue was that a couple of the cells in Excel have more than 255 characters of data; what a nightmare trying to get the Excel driver to read more. And now that it does, the data comes through as an NTEXT type, which is practically useless when I really want a string. What a mess! Not to mention that I enlisted (wasted?) some help from other people (at least 4) who are much more knowledgeable on this technology than I am.

The question I am asking myself is not whether to use a different tool (my trusty C# knife), but when I should have “cut bait and run.”

Every time I ran into an issue yesterday, it felt like I was getting closer and closer to being done. The problem is that it now feels like there was an infinite distance to travel; with closer being relative, I was never going to be done. At what point should I have realized this? Is one day too long to have been doing this? Is this just a case of arrogance and/or stubbornness?

Just one of those things that makes me say Hmmmm.

This type of situation is hard to explain to people who don’t do what I (we) do for a living. I wonder how much time is spent/wasted doing just this sort of thing: using the wrong tool for the job. How many people just keep banging a bent nail? Sure it works, but it’s so brittle you can never change it. I guess I could have just dropped a script component on the SSIS design surface and written the entire thing in C#. Too bad the debugger does not work there. I am just too old to debug something using MessageBox.Show(). Geez! I am so anal about debuggers that a friend and I (he did most of the heavy lifting) wrote our own debugger for a Basic compiler we were using back in the early ’90s!

I think I gave SSIS a fair chance. Now it’s time to get this thing done.
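
For the curious, the plain C# fallback looks roughly like this (paths, sheet, table, and connection strings are all placeholders). IMEX=1 is one commonly cited way to coax the Excel driver into treating mixed-type columns as text, which is related to my 255-character fight.

```csharp
// Fallback plan: read each Excel file with OleDb and bulk-load the rows
// into SQL Server with SqlBulkCopy. All names below are placeholders.
using System;
using System.Data.OleDb;
using System.Data.SqlClient;
using System.IO;

class ExcelToSql
{
    static void Main()
    {
        string sqlConnStr = "Data Source=.;Initial Catalog=Scratch;Integrated Security=SSPI";

        foreach (string file in Directory.GetFiles(@"C:\Imports", "*.xlsx"))
        {
            // IMEX=1 asks the driver to treat mixed-type columns as text.
            string excelConnStr =
                "Provider=Microsoft.ACE.OLEDB.12.0;" +
                "Data Source=" + file + ";" +
                "Extended Properties=\"Excel 12.0 Xml;HDR=YES;IMEX=1\"";

            using (OleDbConnection excel = new OleDbConnection(excelConnStr))
            using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", excel))
            {
                excel.Open();
                using (OleDbDataReader reader = cmd.ExecuteReader())
                using (SqlBulkCopy bulk = new SqlBulkCopy(sqlConnStr))
                {
                    bulk.DestinationTableName = "dbo.ImportedRows";
                    bulk.WriteToServer(reader);   // streams rows straight into SQL Server
                }
            }
        }
    }
}
```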

What I am currently reading

Pragmatic Thinking and Learning: Refactor Your Wetware

I am only 50% through it. There are no code samples or object diagrams in this book. It’s more about how to approach what I do, as opposed to specifically how to do it. I have been thinking more about this myself lately, and I am finding the book very interesting.

I liked the book by the same author (with others) called The Pragmatic Programmer. Bernie turned me onto that book when it first came out, and I recommend it highly. It’s still as pertinent as the day it was written.