As you may have discovered, we have multiple tools that do similar things. In particular, several of them provide survey capabilities.
So now we can sit down, figure out which tool is best for each scenario, and come up with a clear set of guidelines for the team.
This will allow us to better evaluate what is actually being used, whether we should be using one of the other tools instead, and what is not needed.
—————————–
In this example, I have taken the capabilities of each tool and developed a strategy based on a careful evaluation of them.
The most salient requirements and capabilities I used to create this are:
– Reporting and my audience for reporting (the higher the level, the prettier the pictures need to be)
– Whether I have to (or can) connect a survey to a particular learning object
– Whether I can create anonymous surveys
The most difficult part of this exercise is defining the scenarios. I just listed the ones I either encounter frequently or have seen recently.
Like other educators, we commonly use survey tools for smile sheets, testing and pre-testing.
However, we’ve also been using surveys to measure whether particular solutions have been creating change, and what role training has played in that change (the Solution surveys). Those surveys are not connected to a particular course, so using the LMS survey tools (which forced me to connect each survey to a particular item) was out of the question.
I also have a Skills survey scenario. One of my projects last year was to conduct a skills inventory for a segment of our division. This helped us see what human capability we had in-house and how it measured up against certain planned activities. From that exercise, I learned why vendors’ skills evaluation solutions are so expensive.
Again, the way I approached this (requirements to capabilities to strategy) generally only occurs in Wendy’s Utopian Fantasyland.
The reality normally looks like this.