Where I Lay My Head is 127.0.0.1 – Part 2

Here we go again…

Once again it is time for me to make a professional change. To that end, I have left Burntsand Consulting in order to pursue another opportunity. In my (almost) 2 years at Burntsand I learned a lot about consulting as well as how other companies operate. I discovered facets of software development I didn’t previously know existed, and I was fortunate to work with good people while discovering new worlds outside of the Sports Technology field. I find myself again leaving behind many good friends and taking with me some wonderful memories.

That being said, I am very excited to be able to say that I have joined Wintellect’s consulting group as a Senior Consultant. Once again I have found my way to a phenomenal opportunity. I have a long-held deep respect for the people at Wintellect, and meeting them in person has only served to confirm those thoughts. These are some intelligent, creative, and passionate developers who excel at their craft and have built up a top-notch reputation in the industry – I am quite fortunate to count myself among their ranks.

Along with the great people, Wintellect and its members have shown that they share my commitment to participating in the development community, and have a great tradition therein, including publishing books and articles, participation in community events, and even production of the Devscovery conference.

So it is time for another new adventure. As always, I plan to keep folks posted as to how things go. It WILL be fun…

Finding Binding Trouble

Homer: There are three ways to do things: the right way, the wrong way, and the Max Power way!
Bart: Isn’t that just the wrong way?
Homer: Yeah, but faster!
– The Simpsons, “Homer to the Max”

I was recently doing some performance work involving WPF ItemsControls bound to collections, where the underlying collection was likely to be swapped out based on behaviors within the application. The exercise got me thinking about the topic, and it seems worth raising some items that are easy to overlook but that most developers should consider…

First and foremost, there have always been a variety of collection classes in the .Net framework, and each has specific situations that make it more or less desirable. This extends beyond the difference between Stacks, Queues, Lists, Dictionaries, etc., and includes decisions about the expected ratio of reads to writes, the need to sort, the type of value being collected, and so on. Choices about which collections to use – and how to use them – when providing data for databinding in a UI can have a profound effect on the performance and usability of the user interface. (While this line of thought was started by work in WPF and naturally extends to Silverlight, it can apply to WinForms and beyond as well.)

Within WPF and Silverlight, the INotifyCollectionChanged interface (defined in System.Collections.Specialized) is at the center of data binding with ItemsControls (as, for that matter, is IBindingList – which is more commonly used with WinForms data binding…) The focus of this interface is to raise a CollectionChanged event whenever something happens that modifies the contents of the collection. The arguments delivered by this event include the type of change (add, remove, replace, move, or reset) and, where applicable, information about the collection’s old and/or new state. The most commonly used class that implements this interface in WPF/Silverlight is ObservableCollection<T>, defined in the System.Collections.ObjectModel namespace.
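To make the mechanics concrete, here is a minimal console-level illustration of these notifications (the handler simply prints what a bound control would otherwise react to):

var names = new ObservableCollection<String>();
names.CollectionChanged += (sender, e) =>
{
     // e.Action is one of Add, Remove, Replace, Move, or Reset; where applicable,
     // e.NewItems/e.OldItems and the corresponding starting indices describe the
     // collection's new and/or old state.
     Console.WriteLine("{0} at index {1}", e.Action, e.NewStartingIndex);
};
names.Add("First");   // prints "Add at index 0"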

I have often come across this code to expose these collections:  

private ObservableCollection<String> firstCollection;
public ObservableCollection<String> FirstCollection
{
     get { return firstCollection; }
     set
     {
          firstCollection = new ObservableCollection<String>(value);
          RaisePropertyChanged("FirstCollection");
     }
}

I have 4 problems with this code. First, the public property is exposing too much information about the type of backing store that is being used for the underlying collection. Consumers of this property should not care if it uses ObservableCollection<T>, BindingList<T>, or some other custom implementation. Exposing ObservableCollection<T> places an obligation on methods that call the property to provide an ObservableCollection<T> (or a derivative thereof) and makes the codebase difficult to refactor down the road. Encapsulation is broken, etc.  

Second, the property may very well return a null value for the collection at any point. Consumers of the API that would like to iterate over the collection have to remember to add an extra null-check before every attempt to iterate. If they forget such a check, the application is potentially just a few mouse-clicks away from an unhandled exception. It is that much harder for consumers of the API to “fall unavoidably into the pit of success.” I prefer to ensure that an appropriate enumerable value is always returned for these properties, unless null has a very specific meaning (in which case, there may be a better and more expressive way to indicate such a state than with a null value.) Assume the next developer is going to screw things up…and then don’t let him.

My third concern is probably debatable, but here it is anyway: the collection reference is not constant. Whenever the property setter is called, the underlying collection is replaced with a completely new reference. This requires that a property change notification be raised in order to tell the binding control that it needs to redo the binding, since the object it had bound to is no longer relevant – neither its contents nor any changes to it will be reflected in the UI. (For that matter, it may very well be time to relinquish the resources and space the original object is using.)

The final item is discussed at the bottom of this post and is slightly orthogonal to CollectionChanged notifications, but it is still at play in the code above: the use of a developer-provided string to supply the name of the property being changed when a property change notification is raised. The compiler will not help call out any kind of fat-finger mistake in typing the property name, nor will it help if the property is renamed but changing the string argument has been overlooked. (This is the case even if constants are used…it is still just a string that gets passed to reflection inside the binding control, and it is subject to the same typo and sync issues that can occur elsewhere.)

Dealing with the First and Second Issues

For starters, instead of returning ObservableCollection<T>, the property should simply return an appropriate abstraction – more often than not (and especially with the power of LINQ-to-objects), the IEnumerable<T> interface is appropriate. Most binding implementations check to see if the provided object implements a well-known interface such as INotifyCollectionChanged and cast to that interface anyway. Next, in order to address the possibility of returning null from this layer of the API, the getter is changed to trap this condition and return an empty/degenerate value:

private ObservableCollection<String> firstCollection;
public IEnumerable<String> FirstCollection
{
     get { return firstCollection ?? new ObservableCollection<String>(); }
     set
     {
          firstCollection = new ObservableCollection<String>(value);
          RaisePropertyChanged("FirstCollection");
     }
} 
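As an aside, the interface check mentioned above amounts to something like the following inside a binding implementation (a simplification on my part; HandleCollectionChanged is a hypothetical handler):

private void WireUpItemsSource(object itemsSource)
{
     var notifyingCollection = itemsSource as INotifyCollectionChanged;
     if (notifyingCollection != null)
     {
          notifyingCollection.CollectionChanged += HandleCollectionChanged;
     }
}

private void HandleCollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
     // Regenerate item containers, apply the delta to the UI, etc.
}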

Providing a Constant Reference to the Collection Object

In order to accomplish this, I used to use the following code:   

private readonly ObservableCollection<String> secondCollection = new ObservableCollection<String>();
public IEnumerable<String> SecondCollection
{
     get { return secondCollection; }
     set
     {
          secondCollection.Clear();
          if (value != null)
          {
               // An equivalent extension method to the code that follows can be used.
               // secondCollection.AddRange(value);
               foreach (var item in value)
               {
                    secondCollection.Add(item);
               }
          }
     }
}  

This approach was quick to implement, and fairly clear to follow. For small collections and simple situations, it worked fairly well. However, there is one big performance-related problem with this approach. The ObservableCollection<T> dutifully raises a CollectionChanged event every time an item is added inside the “AddRange” loop, which causes the binding control to respond, etc. In simple observations (binding a ListBox to a collection of 10,000 Strings), the performance hit is between 5x and 10x.   
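For reference, the kind of measurement behind those numbers can be sketched roughly as follows (the empty handler below merely stands in for the work a bound ItemsControl performs on each notification; real binding work widens the gap considerably):

var collection = new ObservableCollection<String>();
collection.CollectionChanged += (sender, e) => { /* stand-in for binding work */ };

var watch = Stopwatch.StartNew();   // System.Diagnostics
collection.Clear();
for (int i = 0; i < 10000; i++)
{
     collection.Add("Item " + i);   // raises CollectionChanged once per item
}
watch.Stop();
Console.WriteLine("Per-item notifications took {0} ms", watch.ElapsedMilliseconds);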

To overcome this gaffe, it is necessary to use an ObservableCollection<T> implementation that can say “when I am swapping out the collection’s contents, don’t raise an event every time an item is added; instead, raise a single event at the end of the operation that indicates the whole collection has been reset.” At least one third-party control company that I am aware of includes something like this in their toolkit, but implementing a simplified version yourself is a fairly straightforward process:

  • Create a new class that derives from ObservableCollection<T>
  • Add a Boolean variable called suppressNotifications
  • Provide a ResetContents method as follows:
    • Note that suppressNotifications is set when the operation starts and turned back off when it completes, followed by a Reset CollectionChanged notification.
public void ResetContents(IEnumerable<T> items)
{
     if (items == null) throw new ArgumentNullException("items");
     try
     {
          suppressNotifications = true;
          ClearItems();
          foreach (var item in items)
          {
               Add(item);
          }
     }
     finally
     {
          suppressNotifications = false;
          OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
     }
}
  • Provide overrides for the ClearItems and InsertItem virtual methods (defined in Collection<T>), which are used internally within the Clear and Add methods, respectively.
    • ObservableCollection<T> implements overrides for these that raise the events we want to avoid for now.
    • Note that if events are not being suppressed, the collection just defers to the base implementation from ObservableCollection<T>. Otherwise, it checks for reentrancy and then modifies the underlying Items collection directly, without raising any events.
protected override void ClearItems()
{
     if (suppressNotifications)
     {
          CheckReentrancy();
          Items.Clear();
     }
     else
     {
          base.ClearItems();
     }
}

protected override void InsertItem(int index, T item)
{
     if (suppressNotifications)
     {
          CheckReentrancy();
          Items.Insert(index, item);
     }
     else
     {
          base.InsertItem(index, item);
     }
}
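Putting those pieces together, the skeleton of the class looks something like this (I have called it EnhancedObservableCollection<T> to match the usage below; only the declaration and the flag field are new relative to the snippets above):

public class EnhancedObservableCollection<T> : ObservableCollection<T>
{
     private bool suppressNotifications;

     // ResetContents, ClearItems, and InsertItem are implemented exactly as
     // shown above.
}

With this in place, swapping out the contents raises exactly one Reset notification, no matter how many items are involved.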

For this implementation, performance is again on par with that seen when the entire collection is swapped out and a PropertyChanged notification is raised. The final implementation is as follows (note that because an IEnumerable<T> is being returned, the actual backing type is irrelevant to the consuming code):

private readonly EnhancedObservableCollection<String> thirdCollection =
     new EnhancedObservableCollection<String>();
public IEnumerable<String> ThirdCollection
{
     get { return thirdCollection; }
     set
     {
          // ResetContents already clears the collection, so Clear is only
          // needed as the fallback for a null value.
          if (value != null)
          {
               thirdCollection.ResetContents(value);
          }
          else
          {
               thirdCollection.Clear();
          }
     }
}

While these three changes required a little extra (boilerplate) work, the net result is code that the consumers of this API will have a much harder time using incorrectly:   

  • They do not have to be concerned with the precise backing type, and the result is LINQ-friendly if they want to find out about its contents (e.g., call Count(), etc.)
  • They can safely iterate over the collection without worrying about whether it has been set to null.
  • They can take and hold a reference to the collection and not worry about whether or not some other code has called the setter, leaving a stale reference.
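To see these points in action, here is a quick consumer-side sketch (viewModel is assumed to be an instance of a class exposing the ThirdCollection property above):

var reference = viewModel.ThirdCollection;            // taken once, up front
viewModel.ThirdCollection = new[] { "a", "b", "c" };  // someone else swaps the contents

// No null check needed, and the reference is still current; Count() is a
// LINQ (System.Linq) extension method working against IEnumerable<String>.
Console.WriteLine(reference.Count());                 // prints 3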

One Final Note – Property Change Notifications

Back to the issue of “magic strings” and the INotifyPropertyChanged interface. Even in .Net 2, developers sometimes went to some lengths to provide some level of either compile-time checking or debug-mode assistance. Options I have seen include using reflection to determine the name of the property that is calling the “RaiseXXX” helper function; using reflection only in debug mode to verify that the name provided maps to an actual property, raising an assertion to make the issue more obvious during testing; and using constants or enumerations for the string values. With .Net 3.5 and LINQ constructs (specifically expression trees), there is a new and elegant solution that involves the compiler in ensuring valid entries are provided. It does have a performance impact, but remember that when binding, reflection is usually used by the item doing the binding anyway, which brings along its own set of performance considerations and limitations. Wintellect’s Jeremy Likness has a great, detailed, and more complete writeup (required reading!) on this approach, but here is a simplified implementation:

public event PropertyChangedEventHandler PropertyChanged;

private void RaisePropertyChanged<T>(Expression<Func<T>> propertyBeingChanged)
{
     if (propertyBeingChanged == null) throw new ArgumentNullException("propertyBeingChanged");
     var memberExpression = propertyBeingChanged.Body as MemberExpression;
     if (memberExpression == null) throw new ArgumentException("The expression must refer to a property.", "propertyBeingChanged");
     String propertyName = memberExpression.Member.Name;
     var tempPropertyChanged = PropertyChanged;
     if (tempPropertyChanged != null)
     {
          tempPropertyChanged(this, new PropertyChangedEventArgs(propertyName));
     }
}

And the call to raise the property change notification, in the context of our original property implementation, above:   

private ObservableCollection<String> firstCollection;
public IEnumerable<String> FirstCollection
{
     get { return firstCollection ?? new ObservableCollection<String>(); }
     set
     {
          firstCollection = new ObservableCollection<String>(value);
          RaisePropertyChanged(() => FirstCollection);
     }
}
Notice the lambda expression used in the call to RaisePropertyChanged. Miss the property name by so much as a letter (assuming no other property matches the new name), and the compiler will protest. This does come at a performance price, however. In my tests, the “Compiler-Safe” option can take as much as 200x as long as the “magic string” version, which on my dev laptop was the difference between five one-hundredths of a second and one full second to raise 100,000 PropertyChanged event notifications. How big a concern this is largely depends on how many property changes you anticipate within your bound data, which brings us back to the original discussion about choosing collections wisely in one’s application implementations. If your properties are likely to change only through user interaction in the UI, there is probably little need to worry about millisecond-range performance issues like these, and good reason to favor an implementation that reduces the amount of run-time debugging that will be required (but it is still important to know about and evaluate these issues.)

Sample Code Used for Timings

Remote Debugging Silver Bullet

I have previously been stymied several times while trying to use Remote Debugging to troubleshoot a problem when the machines in question are not on the same domain (or when my development machine is not on any domain at all.) While this worked (apparently it still does) several years ago when I worked mostly on unmanaged code, I never could get past the issues when trying to do this with Managed Code.

Once again, John Robbins has ridden to the rescue with an article providing guidance on how to get past this very issue. His writeup can be found here. I almost can’t wait to have some server code I wrote go out to lunch so I can try this out (OK…I can wait…but now I have a new trick to try when it does…)

This solution will also probably benefit from being paired with the SysInternals ShellRunAs utility.

A Few Thoughts about Windows Phone 7 Development (so far…)

So it has been a few months since MIX and the announcement of the availability of developer tools for the Windows Phone 7 platform, and I have managed to put on at least one presentation about the upcoming phone. With the target of shipping in time for Holiday 2010, it isn’t unreasonable to believe that phones will start to surface in October or November of this year, which is only 4-5 months out. So, what are my thoughts so far?

Obviously, with the tools just at CTP stage right now, room for improvement is to be expected. Silverlight as the primary platform for applications is a great thing, but hopefully we can move off of Silverlight 3 and onto Silverlight 4 before too long. Because Microsoft is retaining control of the platform instead of letting the carriers control it (read: continue to royally screw it up), an update to the Silverlight runtime will hopefully follow quickly.

Developing against the emulator is nice, but the experience is lacking, and of course, we “mere mortals” don’t yet have access to real hardware. Most hardware services within the emulator are simply not available, making it challenging to pursue applications that take advantage of these services. Some solutions are available to simulate or mock the hardware input (using the Reactive Extensions, etc.), but in my opinion these options put too much responsibility for anticipating the real hardware’s real-world behavior in developers’ hands – again, many of whom have not yet had a chance to even touch a piece of real hardware. What’s more, most of the provided hardware interaction APIs expose concrete classes; for mocking and simulation, interfaces would have been better choices in my book, and would have involved fairly low overhead in the API design (a sketch of what I mean follows the hardware list below). Regardless, it would be nice if the emulator could provide pre-built facilities for simulating interaction with this hardware. For example:

Location Services

The emulator should provide an emulator-adjacent window with an embedded Bing Maps control. The phone’s hosted virtual OS should have a hardware driver that uses the map’s coordinate values and updates instead of an actual GPS receiver. From a developer’s point of view, this would be preferable to any option that used the host PC’s GPS, as the software could be tested without taking the host machine literally “out for a drive.”

Accelerometer

Like location services, the emulator’s accelerometer data should also be provided by an external UI. There are plenty of applications available today that provide interaction with a 3-D view of an item (I know I have used the one at ClosetMaid several times to help design the layout for closet storage in my own house), and such a UI would again be an interesting option for testing applications (without trying to physically rotate a development system.)

Other Hardware Devices

For most of the other hardware devices, the emulator application should provide access to the devices of that type on the host machine. Connection to a specific device should be available through options in the emulator’s host application (such as which web-cam to use of those available on the host machine, which microphone to use for input, etc. See Skype as a reference application for choosing these settings.)

Figure 1 – Conceptual Thoughts on Extending Hardware Emulation for the Phone
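Coming back to the interface point raised above, here is a minimal sketch of the kind of seam I would have liked to see in the hardware APIs (all of these types are hypothetical illustrations on my part, not the actual SDK):

// A hypothetical abstraction over the accelerometer. Application code depends
// only on the interface; the real sensor, the emulator, or a unit test can
// each supply an implementation.
public class AccelerometerReading : EventArgs
{
     public double X { get; set; }
     public double Y { get; set; }
     public double Z { get; set; }
}

public interface IAccelerometerSource
{
     event EventHandler<AccelerometerReading> ReadingChanged;
     void Start();
     void Stop();
}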

In Summary

Obviously, this writeup has been more critical than complimentary. Just because QA people focus on reporting defects doesn’t mean they don’t like or believe in the product they are working on…they just want it to be better. My intent here is the same. Having focused my development career (so far) on the Microsoft stack, and seeing that mobile/smartphone applications are going to continue to be more and more important as development targets, I am anxious and eager for Redmond to get this right. The readiness of useful, well-written applications in the phone Marketplace will be critical for the success of this platform (Microsoft is clearly aware of this – just look at the “Hey Windows Phone, I Need This App” contest.) The quality and completeness of the tools that we as developers have available will directly impact the quality and completeness of the applications that make it to the Marketplace.

New Hampshire Code Camp Content

Another code camp under my belt, this time the NH .Net Code Camp…I swore after the last code camp that I would never do 2 presentations in one camp again. This time I did 3. I won’t promise not to do that again, because at the rate I’m going, I’ll end up doing 4.

The first presentation was An Introduction to Silverlight Development. Despite pruning some content from the last time, I still ran a little long, so the presentation still needs some tuning. CONTENT HERE.

The second was my talk on Silverlight Line of Business applications. I made one glaring mistake, but thankfully I was able to recover (and remember to add the OrderBy statement to the Domain Service.) CONTENT HERE.

The final presentation was Windows Phone 7 Development with Silverlight. This was the first run for this demo, and it (not unexpectedly) needs work, compounded by the fact that it is CTP code with inherent limitations…I am extremely interested in the upcoming Windows Phone product and the idea that it brings development for the new generation of smartphones to the .Net/Silverlight development crowd. I really hope the teams responsible for orchestrating both the initial release and subsequent updates get this 110% right. Time will tell, but I am hopeful. CONTENT HERE.

Also during the day I saw Talbott Crowell’s talk on F# and Silverlight, as well as John Bowen’s talk on “Thinking in XAML.” There was a great little insightful moment during John’s presentation where he put together a really good description of the relationship between Controls, Templates, and Styles. I’ve heard and read various descriptions of this relationship, but for some reason this one resonated. To paraphrase the explanation a little bit… CONTENT HERE.

“A control is a set of behaviors defined in code, NOT what you see on the screen. (E.g., a button is really defined by a click behavior, not a grey rectangle.) What you’re seeing rendered on the screen is not “the control”; it is the result of interpreting / processing a Template for that control. Styles are a collection of desired values applied to behaviors or properties.”

This camp was also unique in that I was able to volunteer some time to help organize the event (I coordinated the day’s schedule and pulled together the syllabus content as well as the review forms.) Seeing things from behind the scenes provided an interesting point of view. Many thanks to Pat Tormey and the rest of the organizing crew for pulling the event together.

I also have to mention a couple of upcoming related and worthy events that unfortunately I cannot attend for personal reasons – the New England GiveCamp (June 11-13), and the Connecticut Code Camp (June 19).

As always, many thanks to my wife, kid, and cats for putting up with me in the weeks leading up to the event.

Visual Studio 2010 Tips & Tricks

Visual Studio is a great tool for developing code for the Microsoft technology stack. There are other players out there, but so far, Visual Studio is the king of the road. Most of the developers I know spend the majority of their time working in Visual Studio in one way or another. However, like many productivity applications that have been around for as long as Visual Studio has, the list of commands tends to have grown much larger than many people actually realize.

Up until recently, Microsoft’s Sara Ford had been publishing a daily “Visual Studio Tips & Tricks” blog targeted at Visual Studio 2008. She was even able to put together a book of these tips, the profits of which were used for Hurricane Katrina relief in Mississippi. Recently, Sara has passed the mantle on to her Microsoft colleague Zain Naboulsi, who publishes his daily tips here.

While Zain’s tips focus on Visual Studio 2010, he has been good enough to be sure to call out which ones are also applicable to earlier versions of Visual Studio.

There’s also a Visual Studio 2010 Extension that will put the tips right into your Visual Studio Start Page.


Enjoy!

Boston .Net Code Camp 13 Content

I had the chance to speak about Silverlight again at the latest Boston .Net Code Camp… Two presentations this time, and the “demo gods” were fighting me all the way there (an external HDD flaked out on me 2 days prior, losing a couple nights’ content. Important lesson about backups re-learned…) Several really late nights later (and endless patience from my wife, daughter, and cats) I pulled things together again.

I gave two presentations. The first was an introduction to Silverlight development. One of the downsides to doing anything whose title starts with “An Introduction” is that you get the first slot in the AM, long before the caffeine has had a chance to seep in. It went OK…I think I let the early introduction linger too long and didn’t dive into the code early enough, rushing the latter part of the conversation.

The second presentation went over using Silverlight as a tool for business applications…after some early hiccups it went fairly well. Showing the SL Toolkit’s Graphing capabilities for data expressiveness was well received, as was using the Bing Maps API to geocode addresses. The ease of printing in Silverlight also got some good reactions.

I have uploaded slides and code – the Intro content is available HERE and the Business content is available HERE. Note that the demo that includes interactive Bing Maps has had my personal access key removed/sanitized. To obtain your own key, please visit the URL below and substitute your key in the MapHelper class constant that contains the text “YOUR OWN KEY HERE.” You can obtain your own app Id here: http://www.bing.com/developers/appids.aspx

Also, I found out why the printing demo cut off the map. Because I was just using a StackPanel and taking the screen elements as laid out, some extra whitespace was included in the text address portion of the display at the projector’s resolution. I slightly changed the layout of the grid used in the text portion, then changed the printing code to use a Grid instead of a StackPanel (in order to dynamically adjust the amount of space between the address and the map.)
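For the curious, the adjusted print handler looks roughly like this (a sketch against the Silverlight 4 printing API; the address and map elements below are simplified stand-ins for the demo’s actual controls):

var printDocument = new PrintDocument();   // System.Windows.Printing
printDocument.PrintPage += (sender, e) =>
{
     // A Grid lets the map row absorb the remaining page space, rather than a
     // StackPanel stacking elements with resolution-dependent whitespace.
     var layout = new Grid();
     layout.RowDefinitions.Add(new RowDefinition { Height = GridLength.Auto });
     layout.RowDefinitions.Add(new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });

     var address = new TextBlock { Text = "Address goes here" };
     var map = new Rectangle { Fill = new SolidColorBrush(Colors.LightGray) };   // stand-in for the map control
     Grid.SetRow(address, 0);
     Grid.SetRow(map, 1);
     layout.Children.Add(address);
     layout.Children.Add(map);

     e.PageVisual = layout;
};
printDocument.Print("Map Demo");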

Enjoy! I would like to extend my thanks to Chris Bowen, Chris Pels, the sponsors, speakers, and especially the attendees. Please let me know if there are any questions.

Looking Back at PDC 2009 – General Thoughts

So coming home from PDC turned out to be much different than expected…my wife has been bedridden with a cold since I stepped back into the house on Friday night. I was hoping to reflect a lot more on PDC over the weekend, but alas that was not to be (although it was nice to spend so much time with my daughter after being gone for a week.)

I got a lot out of PDC, though as with a lot of things, what mattered may not have been what was directly intended to be delivered, but rather what was between the lines. It was hard to get a solid track for session attendance – I tended to be all over the place…I think my next conference will have me going to a narrowly focused set of sessions and then catching the videos for what I missed. Regardless, having the videos available is handy, and I’ve already watched several for sessions that I had to skip for one reason or another.

From a high-level, my thoughts are as follows:

Azure: Raymond Chen once blogged that the true measure of when a project is “real” is when stakeholders start talking more about what it won’t do than about what it will do. Azure seems to be there. A lot of the general high-level functionality is in place, and they’ve managed to plug some significant holes in very short order (e.g. single sign-on…) “Dallas” is big, and I think that once I am able to cobble together some demos, some people I know will find it irresistible. The general place where Azure lives and/or will live is in the ability to scale ASP.Net applications up to Azure (an interesting idea is to keep existing data centers, but use Azure for redundancy and for elastic scaling…), and soon, the ability to revert Azure applications back down to ASP.Net and private data centers.

Silverlight: With the enhanced LOB features in SL4 (printing, right-click context menus, shared assemblies, etc.) and especially with the ability to run standalone SL with enhanced trust, the line between SL and WPF is getting incredibly blurry. Bottom line, Silverlight continues to be a platform worthy of the time spent becoming familiar with it.

Parallelism: The content here was not really new – Moore’s Law still seems to predict 80-core machines in the not-too-distant future. The new .Net 4 support for parallelism builds nicely on top of what is already in the framework, but unfortunately there’s still nothing to replace the requisite “InvokeRequired” boilerplate checks at the presentation layer. This becomes problematic when a layer outside of the presentation layer gets refactored to use threading…the UI layer isn’t written expecting it, resulting in a runtime exception. Ideas involving an application-level attribute or other high-level approach to baking the thread-marshaling code into the UI framework controls themselves would probably go a long way; conversely, the absence thereof will probably stifle the extensive use of parallelism in real-world applications, due to the perceived complications its inclusion introduces.
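For anyone who has not had the pleasure, the boilerplate in question looks something like this in WinForms (statusLabel here is a hypothetical Label on the form):

private void UpdateStatus(String message)
{
     // Touching a control from a worker thread throws at runtime, hence the check.
     if (statusLabel.InvokeRequired)
     {
          statusLabel.Invoke(new Action<String>(UpdateStatus), message);
          return;
     }
     statusLabel.Text = message;
}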

Data: Put simply – goodbye LINQ-to-SQL. With EF4, there’s really not much need anymore. The ability to come at data model-first, code-first, or database-first is really helpful…from my perspective, when doing “Greenfield development“, the Model is My Truth, and both the code and the database are simply implementation artifacts.

What was missing: Ray Ozzie’s talk of “Three Screens and the Cloud” began to ring hollow when it became clear that Windows Mobile was taking a back-seat at this conference…no talk of WM7, Silverlight Mobile, etc. What happened to the “little screen?” Just saying “we don’t have it yet, but we’re working on it and just wait…it’s going to be awesome” would have gone a long way. Not mentioning it actually turned it into the 800-lb gorilla in the room.

Also, what happened to the Live Services story? What about Live Mesh? It looks like these topics are being taken back into the garage for a retune, and their inclusion in last year’s introduction of the Azure stack may have been inadvertent noise. I have found Mesh in particular to be a very useful tool, but its lack of any relationship whatsoever to SkyDrive is perplexing.

Finally, if this year is any indication (and it may not be), it looks like the PDC may be being positioned for a new identity. With other conferences like Mix, SQL-PASS, and SharePoint-specific conferences, among others, it may be time to make TechEd IT-specific and bring TechEd’s developer content into PDC. I felt the “split approach” taken by TechEd in 2008 (1 week for dev, 1 week for IT) worked out nicely. Time will tell…

As I said, it was a good conference for many reasons for me. In addition to the show contents, there were interesting networking experiences. I’ll be posting about individual technologies in the coming few days.

I’m Off to PDC (or Contracting Encephalitis)

I’m off to this year’s PDC event in Los Angeles. From the logo, it looks like attending this conference will result in horrific brain swelling. It’ll be several days of immersion into the latest and upcoming Microsoft development stack. This year, I’m particularly interested in Silverlight, SharePoint 14/2010, Azure, and what’s new in parallelism. I’m also looking forward to being able to engage with representatives from JetBrains, DevExpress, and RedGate (especially in regards to the exciting new things they are doing with Reflector Pro!) to discuss their productivity tools and their inclusion in Visual Studio 2010.

I’ll be at the MSDev Booth on Wednesday from 12-12:45 as part of the Partner program, basically to let people know of my and Burntsand’s success stories with the Microsoft tools and products.

MaxiVista is back!

It always felt a little odd that a product called MaxiVista was (somewhat) incompatible with Microsoft’s Vista OS. However, after a bit of a wait, the folks at MaxiVista seem to have overcome their technical hurdles and the product is back with MaxiVista v4.

For those who may be unfamiliar, MaxiVista is a software application/driver that allows:

  • a remote PC to act as a second (or third, or fourth) monitor
  • remote control (K/M) of a second PC
  • clipboard sharing between PCs
  • display cloning between PCs

This all takes place over a regular LAN connection. In the past, I have used this software while traveling to allow me to use a low-end laptop to act as a secondary display to my main development PC, which is a lot more convenient than trying to travel with an actual display, LCD or otherwise. All you do is run a small Viewer application on the “slave machine” (no installer required) and the main PC can discover it and make use of it.

The product supports Windows Vista and Windows 7 in both 32 and 64-bit configurations (and of course still supports XP, 2003, 2000, etc.)

If you are as much of a fan of using multiple-monitors as I am, MaxiVista is definitely worth a look. They do offer a time and run-limited free trial version.