
Tuesday, March 06, 2012

The Individualized Education Program: IEP

Article first published as The Individualized Education Program: IEP on Technorati.

Child with his Special Education Teacher.
The IEP, or Individualized Education Program, is a tool educators use to track your child's progress as they work through their disability. The process involves a lot of people, but exists for one purpose: to help your child get as close to mainstreaming as possible, and keep them educated alongside their peers. But the process can seem a little scary, or perhaps a bit intimidating, and a lot of parents feel lost when they meet with their IEP team.

An IEP is generally created after the school system has evaluated your child and determined there is a significant learning disability of some sort. Autism falls in that category, as does any kind of hearing, sight, speech, or developmental delay. Once the team has determined a disability or delay is present, they look for the reasons behind it. It should be noted that language barriers or "bad teachers" are not reasons for an IEP; only actual disabilities that impair a child's ability to learn qualify.

It should also be noted that the IEP process differs from State to State, though just about every State Board of Education has one, and it should cover much the same services. If you have any questions about your specific State's IEP process, contact your school district's Special Education office. They can give you all the details you need.

Once you have a confirmed reason for an IEP, the process starts. At this point the school's determination of the underlying cause doesn't matter much, because the IEP is not standardized by "diagnosis". Rather, it's designed to be customized to the needs of the child, as determined by the IEP team.

So who is this team I keep talking about? First and foremost, it is you as a parent, along with your child. In addition, your child's Special Education teacher should be present, as should a representative from the school's administration team (the principal or a delegated representative), any General Education teachers involved in your child's education (including any that advise but don't teach directly), and someone who can interpret any evaluations given to the child (generally a school psychologist).

The first three are pretty obvious: you, your child as the student, and the Special Education teacher. But why have a member of the administration there as well? IEPs are conducted under State regulations, and the administration representative has the authority to allocate funds if necessary. They are often called Local Education Agency (LEA) Representatives, though they generally come from the Principal's office.

Also, why have a general education teacher if your child is only in special education? Well, a general education teacher is there to represent the requirements expected of mainstreamed children. They know what the required educational goals are for typical students, and can help the Special Education teacher set goals that will bring the child closer to being mainstreamed. Because that is the goal of every IEP: to bring the child up to their grade-level expectations, and keep them there.

So now you have your team, but what do they do? They start by writing statements about present academic achievement and functional performance. These come from direct or indirect assessment (tests or observation), and establish where the child is currently performing. This forms the foundation on which the team can plan your child's education program; another way to look at it is as the road's starting point.

Next, your team will write down measurable goals for both academic and functional performance. Measurable means your child will need to accomplish certain tasks X number of times, or attend to an activity for X number of minutes. A bare percentage doesn't mean anything in the measured world of your child's education, so don't let anyone get around you by adding one as a "measure". Focus on achieved results within specific time-frames and a specific number of repetitions. If you have to ask how something would be measured, it's not clearly measurable, at least to you. It also helps to ask how each goal will be evaluated and assessed; that way you can do the same things at home to build on what your team has in store.

Next come written statements on whether or not your child should attend specific activities (for the sake and safety of the child), any needed accommodations for measuring academic achievement (someone to read questions to a student that can't see or read, for example), and when the services will start and stop, and how long they will last. All this should be in your child's IEP, and you should receive a copy! Interestingly enough, our oldest son's teacher sends us a copy of the IEP ahead of the team meeting, so we have a chance to review it and come in prepared for discussion. It's unusual, but definitely welcome!

So what should you as a parent ask when you meet with your team? Well:

  1. Are the IEP goals measurable? I focused on this quite a bit, and for good reason. Make sure you can measure your child's progress, and then you know your teachers can as well.
  2. Is my child in a regular education environment all or part of the day? Why or why not? If you feel your child should be allowed into a mainstreamed classroom, voice that concern! This is your opportunity to make sure you are heard.
  3. Does the IEP list your child's accommodation needs? That includes any State-wide and district-wide testing needs, not just the teacher's testing needs. That way you know that when they take their Standardized Testing, they have the accommodations and modifications necessary.
  4. Are the goals realistic for my child? Is the school listing goals YOU feel are attainable for your child? This is a tough question for everyone concerned, because you want to set goals that can be reached, but not so simple as to not challenge your child. The good news is that if the goals are reached "too soon", then they can easily and quickly be revised!
  5. Is my child expected and able to meet graduation requirements? Graduation is something that every parent looks forward to, and every child should as well. But you need to make sure the goals are working your child toward that point, even if their disability makes it seem almost impossible now. Always push for that point in your child's future, and look for ways to help make it happen. That's the whole point of the IEP anyway!
  6. When will the IEP be reviewed? This is a tough question. The State of Utah requires a review at least once a year, with a re-evaluation at least once every three years, but the review frequency beyond that is really up to the team, and that means you. If you are seeing significant improvement in your child's performance, or have any questions that need to be addressed, don't hesitate to contact your team members and request a review.

So now that you have gone through the IEP process, you are going to be asked to sign the IEP. Most States treat this signature as an agreement to the document. But not Utah. In Utah, it means only that you were present at the IEP meeting; even if you don't agree with the IEP, you can sign without binding yourself or your child to its goals and guidelines. I've never had an IEP for my child with which I didn't agree, so I don't worry about it too much. If you do have concerns and you are not in Utah, don't sign until they are addressed. Of course, this also means that in Utah the IEP can be implemented without parental approval, but there are safeguards for that; that is another post.

So that is the process of going through an IEP! I would like to thank the folks over at the Utah Parent Center for the presentation they offered to the few of us there on Thursday night. It was a great deal of useful information that was definitely needed.

I'll post more on the IEP process later, so as to not overwhelm you all with posts.

Wednesday, February 29, 2012

Autism: The Evaluation

Article first published as Autism: The Evaluation on Technorati.

Youngest son riding a zebra on the zoo Carousel
This was a day we had looked toward for a long time. Partly it was a day of anxiety, but mostly it was a day of vindication and relief. It was the day our youngest was evaluated for Autism.

Our oldest son has Autism, and since the youngest was born we had thought he might have it as well. But things were different: he was more free with eye contact than our oldest, and more likely to interact and smile with you. What he didn't develop was speech. Sure, he could talk a little bit, and learned a couple of phrases, but he was also losing words, and the phrases didn't make any sense. We were therefore concerned that his behaviors were not learned from his older brother, but rather inherent.

The Granite School District in Salt Lake City, Utah, is the largest school district in the State. As such, they have a lot of funding they can apply to evaluation and special education. It was at their offices that our evaluation took place. We took Scott in after getting Jonathan on the bus for school, and started by filling out paperwork. The speech pathologist came in and began working with Scott as we ran through behavior ratings. The school psychologist observed him with us, and also tried to give our son an intelligence test. It didn't work, however, as our son refused to attend to the task at hand at all.

We then filled out an Autism Spectrum survey, as the psychologist saw the signs we had believed we saw. The survey was pretty straightforward, without a lot of detail to cover with which we were not already familiar because of our oldest son. After two hours of evaluations and questions about Scott's behavior at home and at school, they took the evaluations back and started to score them. It took about an hour or so, during which I got to play with my son up and down the hall in a quiet section of the offices. We then had Scott's hearing tested (just to be sure), and they returned with their evaluations.

For some reason school officials seem reluctant to use the word Autism. Perhaps they are concerned that parents will get defensive, offended, or otherwise annoyed. Whatever the reason, they talked through why it wasn't just a developmental delay or some other circumstance that would explain his behavior, and decided to classify his educational status as "Autism". They made it clear that it wasn't an official medical diagnosis, as it doesn't clearly outline where on the spectrum he sits.

Honestly, it doesn't matter much, because ABA techniques work across the spectrum, and we could tell what his level of comprehension was on the various subjects. Nope, we just wanted to hear that our son would benefit from the special needs IEP (Individualized Education Program) that would guide his education, and have access to the right kind of environment to best help him. And that we got.

Getting evaluated for Autism is primarily a parental evaluation, as parents fill out the forms that describe their child's behavior. But, of course, the teaching staff of the child's class fill out the same forms to evaluate behavior in the classroom. Why? Well, sometimes children behave differently in class than at home; this way the psychologist can get a clearer picture. It also provides validation for both sides of the evaluation, as most often the two sets of results are very close. Our results did show more development than the teacher's, but our son does perform much better at home, in a familiar environment, than at school.

So that is that then. We currently have two children on the Autism Spectrum, our only two children. I started thinking about that: both my children have Autism. Both my children are special needs, and will have a rough go of it when they get into school. Sure, they may have their own classes, but I can just imagine how some of their peers will behave as they get older. It's a whole new dynamic, as athletics are going to take a back seat to behavioral analysis and occupational therapy. Speech therapy will take the place of things like the Drama club. The only thing I hold out hope on is music for my children. But even that is not a guarantee.

You would think I would be depressed, angry, or hurt. But the thing is, I'm not. I already suspected that my youngest had Autism, and as both my wife and I have no idea how to raise a child other than on the spectrum, it actually simplifies the home dynamic for us. We don't have to worry about a child feeling alienated because we spend more time focusing on one son's behavior than on the other. It also means that we can work with them both at the same level (essentially), even though they are 4 years apart in age. I'm already used to having a child with Autism, it holds no fear for me.

If you have any questions about whether or not your child has Autism, I suggest you get them tested. Most school districts that have a school psychologist will have the necessary test procedures in place, and should be willing to do it for you. If your school doesn't have the resources (and some less well-funded schools will not), then check with your child's doctor and see what they recommend. If you catch it early enough, Autism can be treated. But if you wait too long, it becomes exponentially more difficult to change the behavior.

Friday, February 03, 2012

Review: Treehouse Training and Badges

Having finished all the available badges on the TeamTreehouse.com website, I thought I would offer an evaluation of the website, the learning method, and the delivery.

Website

The website is very well put together, even though there is a bit of a "start-up" feel to the site. That feeling comes from the three badges (as of this writing) that are incomplete (JavaScript Foundations, Photoshop Foundations, and Ruby Foundations). There is also generally a delay in loading certain pages (like the Profile and Dashboard). And when you take the quizzes to earn your badge, occasionally one will blank out for no obvious reason, meaning you need to go through the questions again.

But the organization is very well done. It's easy to navigate through the course materials from one badge to another, and the Dashboard makes it easy to see what your next badges should be. Overall, I really like the website.

Learning Method

The badges are organized by topic and build upon each other to show which skills you have accomplished. You know you have accomplished them, because most badges have challenges and final challenges that require you to demonstrate your knowledge by completing a task. It's well built, and equates to the classroom "topic, then quiz" method of establishing skills. I've already mentioned the incredible motivating factor that comes from earning a badge.

Straight video lectures with demos are not for everyone. They are great for those who learn visually and/or auditorily, but tactile learners (those who need to get hands-on) will find the pace of the videos a little frustrating. Another frustration I experienced was the number and length of videos that can precede a quiz. It requires the student to retain a lot of information, and without more practice after each video, quizzes can get frustrating. In particular I'm thinking of the Introduction to Programming badge and the iOS 4 badge. Both had videos lasting 11+ minutes, several in succession, making it harder to retain information for the quiz. And I find that it's the test that helps you learn, more than just the lecture.

Overall, I think this is a great way to learn. Video lectures can work well when quizzes are appropriately spaced, and most of these badges do really well.

Delivery

I found the most effective learning experiences with Treehouse were those that had videos lasting no more than 7 minutes, badges (modules) that had no more than 4 or 5 videos, and challenges that preceded a small selection of modules. From there the retention was optimal, while also giving me plenty of content on which to work.

Conclusion

Overall, I would definitely recommend Treehouse, or any similar badge-based learning method. The motivation you get from earning badges that build into more badges is intense, the ability to show your knowledge in such a clear-cut form is refreshing, and the confidence that you know what you know is even better. Badges are looking like a very viable new way to qualify learning at an incremental level.

Thursday, January 26, 2012

Badges: Motivating Education

For many years Education has had a big problem: it's been seen as boring, tiring, and a chore. From the days of "No more Teachers, no more books" to "Hey Teacher, Leave them Kids Alone", people have been complaining about education. Everyone from parents to teachers has been looking for some way to make education fun again. And it seems something has grown out of the video game world that can help: badges.

Badges are, essentially, minor accomplishment trophies showing mastery of a skill. Unlike the old "Gold Star on Forehead" method teachers used to reward correct answers, badges can be linked directly to a single skill (or series of skills). Video games use them to motivate the player to keep playing by giving them something to work toward that takes perhaps 15 to 30 minutes. Before long, you have a player that has spent hours playing a game just to earn a virtual award and feel accomplished. While many parents see these accomplishments as hollow, educators have seen them as a way to keep students interested in learning.

I have to admit, I was skeptical at first when I saw a number of institutions applying them. How can you be sure they show a level of accomplishment? What is the standard of measurement? How is a badge a sign of quality education, showing a quantitative, measured result? Well, the only way to know for sure was to test it out. I found a website, TeamTreeHouse.com, that provides training videos and builds the student up through a number of badges. The registration rates were reasonable, so I signed up to see what it was like.

They (currently) have three main badge paths: Web Design, Web Development, and iOS 4 Programming. Looking at the number of videos and the length of each, I figured that if I booked through them I might be able to finish the whole training regimen within a month, so I selected every badge path they had. Then I started on the first badge, which was an Introduction to HTML. As a learner, you watch a series of short videos (the longest was almost 20 minutes, the shortest less than 2), and then take a quiz at the end to see how much you learned. After answering five consecutive questions correctly, you are awarded the "minor" badge and move on to the next. After earning all the minor badges in the HTML badge set, you are awarded the HTML badge, and move on to the next set. After completing all the Web Design badges, you are awarded the Web Design "super" badge.
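
To make that gating rule concrete, here is a minimal sketch in Python. The class and names are my own invention for illustration, not Treehouse's actual code; the only thing carried over from above is the five-consecutive-correct-answers rule.

```python
# Minimal sketch of the quiz gating described above: the minor badge is
# awarded after five consecutive correct answers, and a wrong answer
# resets the streak. Hypothetical names, not Treehouse's implementation.

REQUIRED_STREAK = 5

class BadgeQuiz:
    def __init__(self, badge_name, required_streak=REQUIRED_STREAK):
        self.badge_name = badge_name
        self.required_streak = required_streak
        self.streak = 0
        self.earned = False

    def record_answer(self, correct: bool) -> bool:
        """Record one answer; a wrong answer resets the streak."""
        self.streak = self.streak + 1 if correct else 0
        if self.streak >= self.required_streak:
            self.earned = True
        return self.earned

quiz = BadgeQuiz("Introduction to HTML")
for answer in [True, True, False, True, True, True, True, True]:
    if quiz.record_answer(answer):
        print(f"Badge earned: {quiz.badge_name}")
        break
```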

Once I saw how it worked, I was impressed. Evaluation of student knowledge is critical to learning, both before students start to learn and after. By allowing a quiz to be taken at any time during the badge sessions, this method lets students evaluate how much they already know about a given topic and how much more they need to know. For online learning this is great, because students have a way to self-evaluate when they need more instruction, how much instruction, and on which targeted skills. As an added bonus, badges show everyone involved in a person's education, from the teacher to the parent to the student, and even to a potential employer, what skills they truly have beyond having "taken a class". They may be minor accomplishments, but they represent real skills that have been acquired.

There is a caveat, though: with the automated testing on TeamTreeHouse.com it is possible to keep trying answers until you get them right, as the questions repeat from a relatively small pool. That could be easily remedied with a larger question set, a time limit on the quiz, or both. Personally, I don't think it's too terrible, as even answering a question wrong forces you to rethink the answer, and that in and of itself is learning.

So what about our gilded halls of learning, both K-12 and Higher Ed? How could this be implemented? It would be both very easy (in concept) and extremely complex (in execution). Most educators have already built well-ordered lesson plans that break down into topics, skills, knowledge, etc. that would map directly to badges, both minor and regular. Keep collecting them, and you get a certificate with all your accomplishment badges, detailing the skills you learned while studying. The real problem would be keeping track of these badges. An easy way would be to offer quizzes and award a badge as each quiz is passed. But someone would need to manage the badge accomplishments and provide a way to make them "public", either as physical badges or digital ones.
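
As a rough illustration of what that bookkeeping might look like, here is a hypothetical sketch in which minor badges roll up into a regular badge once every minor badge in the set is earned. The structure and names are my own; this does not describe any existing system.

```python
# Hypothetical badge registry: track minor badges per student, and report
# which badge sets (regular badges) are complete. Illustration only.

from collections import defaultdict

BADGE_SETS = {
    "HTML": {"Tags", "Links", "Forms"},
    "CSS": {"Selectors", "Layout"},
}

earned = defaultdict(set)  # student -> set of (badge_set, minor_badge)

def award_minor(student, badge_set, minor):
    earned[student].add((badge_set, minor))

def completed_sets(student):
    """Return the badge sets in which the student holds every minor badge."""
    return [name for name, minors in BADGE_SETS.items()
            if all((name, m) in earned[student] for m in minors)]

award_minor("Alice", "HTML", "Tags")
award_minor("Alice", "HTML", "Links")
award_minor("Alice", "HTML", "Forms")
print(completed_sets("Alice"))  # ['HTML']
```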

The logistics of the badge question can be worked out, but it will take time to apply it to traditional education. In the meantime, to illustrate just how addicting learning by badges can be: I started the task of completing all 66 available badges on the site (as of this writing) on Monday, and I have just 10 more to go. It is definitely taking less than the month I thought it would, and that for me is reason enough to take education with badges seriously. If you would like to see what these badges look like, you can view my profile. This is just one very exciting thing I can see coming for educating a connected generation. What do you think?

Friday, November 18, 2011

Schools, Teachers, Autism: Working with the Specialists

Boy with Autism writing on a magnetic tablet.
Article first published as Schools, Teachers, Autism: Working with the Specialists on Technorati.

This week we had our second (and my first) parent-teacher conference with my son's first grade teacher.  She just started, has a Master's degree in Special Education, and is very excited to be working with her group of students.  But this year, so far, she has been struggling with my son.  That struggle has not been because of his inability to learn, but rather her struggle is trying to find ways to connect with him and teach him.  

We discussed how we work with him at home, and what they see as a barrier in my son's development.  It seems that he is highly visual and tactile, and needs a lot of deep pressure stimulation to calm down enough to perform in class.  We talked about strategies for working with him, ideas that would be tried over the next couple of days, and what we can do at home to help him focus and work on learning.  

In the past I have talked about how I get defensive about my son and the work we do with him at home. But it took a good talk with his Kindergarten teacher and the school psychologist (who tested his IQ and was frustrated, because there was no way to test him more accurately until he is more verbal) to understand that they were there to help us help him. They were the experts in special education, behavior techniques, and the tools necessary to teach him, but they needed us as parents to use their methods to reinforce the lessons. It seems odd to say this, as I teach for a living, but we as parents always want to "know what's best" for our children. And sometimes, we don't.

Perhaps that is why so many parents are now quick to blame teachers and schools for their children's failures. Instead of working with the teacher, they fight them for "judging" their child. It's frustrating for teachers, it coddles children into thinking they don't have to work if they just make a big enough stink about every little grade, and it teaches children that being a bully will get you what you want in the short term.

So what can we, as parents, do to help our children develop and learn? Something I learned from my parents: go to parent-teacher conferences with a goal, namely to learn what you can do at home to encourage learning. It's more than just forcing your children to do homework. It requires discussion about the topics, making games that reinforce learning concepts, and instilling a desire to read.

When we came back from our consultation, we came back with specific goals:

  • Work on writing, spelling, and spacing
  • Work on addition (mainstream 1st grader skill)
  • Work on sorting into categories and groups
  • Work on relationship between verbs and their concepts
  • Practice sharing and taking turns
  • Practice coloring
  • Find a deep pressure sensory solution to help him focus

Some of these skills may seem pretty basic for children in first grade, but they are common problems children with Autism have.  But the one thing that got me excited is the fact that my son is getting to the point of being mainstreamed in at least math.  It will make his uncle proud, I'm sure, and it thrills me to know that he is focused on learning as much as he can.  And with our take-aways from the meeting, we have a way forward to help him.  

Autism is a scary business, particularly if you are doing it alone.  Having the support of your child's teacher and the school staff is something you definitely need.  Add into that a supportive family and, if possible, religious or social community, and you can see dramatic changes in your child's development.

Monday, June 25, 2007

Reliability and Validity within Assessment: Reaction

Reliability and validity within assessment, as within all parts of education, are necessary to make the results of educational work meaningful to peers and to those who requested the work. Without reliability or validity, the results of that work become useless. But in order to understand the impact of each aspect independently, it is necessary to understand the terms clearly.

Reliability
Reliability has been defined differently depending on which experts are consulted. Baer defined reliability as “the degree to which two observers viewing the same behavior at the same time agree on its occurrence and nonoccurrence” (Gresham, 2003). This means that in order to have a truly reliable result, it needs to be recognized by more than one observer of the same behavior at the same time. As a definition, this is perhaps the most widely accepted within applied behavior analysis, and remains so to this day (Gresham, 2003).

Johnston and Pennypacker defined reliability very differently, as “the consistency with which measures of behavior yield the same results” (Gresham, 2003). This applies to the consistency of results from the same behavior, and is perhaps more applicable to individual experiments and observations. It differs because Baer's definition doesn't take into consideration the individual bias of the observer during the observation. In an educational environment, one teacher can see the results of a student's behavior completely differently than another teacher does, even though the same behavior is being observed. Johnston and Pennypacker's definition addresses the actual results of the behavior, not the interpretation of the behavior that produced those results.
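
To make the contrast concrete, here is a small sketch with invented data: percent agreement between two observers captures Baer's definition, while the consistency of repeated measurements captures Johnston and Pennypacker's.

```python
# Contrasting the two definitions above (data invented for illustration).
from statistics import mean, stdev

def percent_agreement(observer_a, observer_b):
    """Interval-by-interval agreement between two observers (Baer)."""
    matches = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * matches / len(observer_a)

# 1 = behavior occurred during the interval, 0 = it did not
obs_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
obs_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(f"Interobserver agreement: {percent_agreement(obs_a, obs_b):.0f}%")  # 80%

# Johnston & Pennypacker: repeated counts of the same behavior should agree
repeats = [12, 14, 13, 13]
print(f"Consistency (coefficient of variation): {stdev(repeats) / mean(repeats):.2f}")
```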

Validity
Validity does not rely on hypothetical constructs for its description, but on actual results (Gresham, 2003). According to Johnston and Pennypacker, “if the behavior under study is directly measured, no question about validity exists” (Gresham, 2003). This leaves only indirectly measured results in need of validation, which is therefore provided by directly measured results. Of course, this view assumes that direct methods of measurement do not contain large amounts of error (Gresham, 2003). That presents a problem for our definition of validity once the question of error is raised.

In answer to that problem, many behavior analysts consider the concept of accuracy to be much more important than validity (Gresham, 2003). While validity concerns the results themselves, accuracy measures the degree to which the results reflect the true state that the analysis is meant to measure (Gresham, 2003). This calls into consideration the content of the analysis being measured, and how well it reflects that true state.

Along the same lines, content validity has become more relevant than any other type of validity (Gresham, 2003). Linehan has argued that assessment procedures need to focus on actual representative sampling before validity can be attributed to the results (Gresham, 2003). Others feel that multiple sources of results, as well as multiple measures, lend validity to the overall assessment, as they give a more complete picture of what is being evaluated (Henderson-Montero et al., 2003). Both approaches provide a more comprehensive understanding of the results through valid content, while allowing for acceptable statistical error.

Reliability and Validity in Application
Now that we have a general feel for what we want in our assessments, how can we apply this knowledge to an actual assessment situation? Lane and Ziviani (2003) managed to address these particular points in their assessments of children’s mouse proficiency.

The first step was to determine exactly what results they were looking for. This was particularly difficult, since in many areas their assessment was breaking new ground. What they wanted were measurable results that could be gathered through computer interaction using only the mouse. To provide variability in the testing scenario, they tested their subjects one week apart in each case. They then pooled the measured results for a more accurate assessment, as opposed to assessing each group individually. Finally, they used standard measurement procedures and algorithms so that their peers would have a recognized standard to relate to when the findings were published.

In order to assess the reliability of the assessment, Lane and Ziviani conducted additional studies beyond the initial one, drawn from various pools. This provided more accurate measurements of the results, and establishes reliability under Johnston and Pennypacker's results-based definition (Gresham, 2003). They also tested in environments that were mutually available, convenient, and comfortable for those being evaluated, which allows for more accurate measurement.

In order to provide validity to the results, two aspects were considered: construct-related validity and criterion-related validity. Both draw on Linehan's representative sampling as a source of content validity (Gresham, 2003), and both were used to establish how valid the actual measurements were.

With construct-related validity, Lane and Ziviani focused on the ability to complete aiming, tracking, drawing, and target-selection tasks with maximum speed and efficiency (Lane & Ziviani, 2003). This gives a clear idea of what is being assessed and how the results should be measured, so the actual measurements should not be affected by content that contains unpredictable errors (Gresham, 2003). Criterion-related validity focused specifically on the predictability of the results based on a coefficient of 0.5, which is fairly standard for similar assessments (Lane & Ziviani, 2003). This also provides validity, as the criteria are validated when the expected results reach the predicted mean in the statistical review.

Conclusion
And so we see that once definitions of reliability and validity are reached, and our understanding of those terms is firmly set in the assessment, the assessment itself can provide valid results that are reliable within statistical means. The definitions you select determine the direction of your assessment, as well as its general validity and reliability as seen by your peers.




Resources
Lane, A., & Ziviani, J. (2003). Assessing Children’s Competence in Computer Interactions: Preliminary Reliability and Validity of the Test of Mouse Proficiency. OTJR, 23(1), 18.

Gresham, F. M. (2003). Establishing the Technical Adequacy of Functional Behavioral Assessment: Conceptual and Measurement Challenges. Behavioral Disorders, 28(3), 282.

Henderson-Montero, D., Julian, M. W., & Yen, W. M. (2003). Multiple Measures: Alternative Design and Analysis Models. Educational Measurement: Issues and Practice, 22(2), 7.

Tuesday, June 19, 2007

Criteria and Standards

With every assessment that is given, there needs to be a specific set of goals behind it to make the results meaningful and useful. Without those standards and recognized criteria, an assessment cannot be an accurate measurement of the abilities or skills possessed by the learner. While many instructors and students will spend most of their time focusing on the results, we as instructors need to recognize the methods used to develop the standards by which results are measured.

The Need for Standards
As previously stated, standards are required to make an accurate picture of the skill level of a learner through the results of an assessment. The actual assessment method is not necessarily important, as long as it can accurately show the performance of an individual with regards to a specific skill set.

The first order of business is to define the standards that are to be identified by the assessment, and how performance indicators for those standards should be adapted to the target student population (Browder, 2003). The criteria being set need to be standard across the board so that accurate results can be measured. Once set, various assessment methods can be applied to measure those particular performance requirements.

The example given by Browder concerns methods of assessing the performance and skill level of disabled students. In that situation, passing out milk to classmates in the morning can address standards in listening, speaking, number operations, and problem solving (Browder, 2003). The same can be said of a learning team within the University of Phoenix: team behavior can be used to assess organization skills, team-building abilities, leadership qualities, and teamwork. The assessments in both examples are not standard written assessments, yet they have the same qualitative properties if the criteria being measured are taken in the context of the skills demonstrated in each activity.

With the understanding that standards for several criteria are being set and need to be reached with each assessment, it becomes necessary to define the criteria to the student. Otherwise the student will remain unaware of the standards they are required to reach, and be left to imagine their own requirements, right or wrong (Hinett, 1997). This can lead to misunderstandings that inhibit the student's ability to perform under the ideal conditions for proper assessment.

This is where rubrics become important to students. They define the standards that are required, and outline what criteria are assessed in the learning environment. This rubric can take the form of a complete course outline with grade expectations and assessment points, or it can be a simple set of instructions and rules to follow during the assessment. At each level, the required standards are identified and presented to the learners for clarification and guidance as to what is expected of them.

Developing Standards
Now that we understand why standards are important and how they are implemented in an evaluation environment, it is necessary to understand how such standards are developed. Black and Duhon (2003) identify a clear way to develop standard requirements and grant validity to assessment findings. They identify criterion validity as the extent to which scores on the test correlate with other variables that the instructing institution expects to be associated with test performance (Black, 2003). The standard is generally developed from the educational institution's previous experience with similar student populations. Once the school has identified the standard it wishes to set through that experience, it can then compare its findings with those of other schools with similar demographics. This produces an industry standard that all schools are expected to reach. But suppose a new concept, technology, skill, or process is developed? How is one to identify a method of measurement with the potential to standardize the criteria being assessed?

The method can be identified by first identifying the criteria of the assessment itself. Is there a skill that should be identified, and if so, how can it be measured? Once that is settled, similar methods can be used as control comparisons. The example that Black and Duhon use relates to the performance of business majors on the Educational Testing Service's (ETS) Major Field Test in Business. The goal was to see how well common methods of assessing achievement would predict performance on the ETS test.
The Criteria
The students tested were grouped with the following criteria in mind:
1. GPA (both in Business-specific courses and overall)
2. ACT/SAT scores (both cumulative and English/Math only)
3. Age difference
4. Gender
5. Major emphasis

The Results
Once the data were gathered, the following results emerged (a short sketch after the list shows how these coefficients combine into a predicted score):
1. For each one-point increase in Business GPA, the average ETS score was 7.49 points higher.
2. For each one-point increase in cumulative ACT score, the ETS score increased by an average of 1.51 points.
3. For every additional year of age, the ETS score increased by an average of 0.71 points.
4. As for gender, males tended to score 3.79 points higher than females.
5. With respect to major emphasis, Management majors tended to score 3.57 points lower than all other majors, once all other criteria had been controlled.

Once the statistics have been gathered, it is important to understand how they are significant. If there is a high correlation (+/- 0.70) between any pair of independent variables, it indicates collinearity, with its corresponding distortions (Black, 2003). Once collinearity has been checked for, the model represents a valid measurement that can be used as a standard against additional results. It identifies statistical predictions of where students will generally score based on previous experience, skill exposure, and educational background. Once that standard can be reliably measured, assessments become equally reliable.
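
A quick sketch of that collinearity check, with invented data: flag any pair of predictors whose Pearson correlation reaches +/- 0.70 (the `statistics.correlation` function requires Python 3.10 or later).

```python
# Flag predictor pairs whose Pearson correlation reaches +/- 0.70.
from itertools import combinations
from statistics import correlation  # Python 3.10+

predictors = {
    "gpa": [3.1, 3.5, 2.8, 3.9, 3.3],
    "act": [22, 26, 20, 29, 24],
    "age": [21, 23, 22, 22, 25],
}

for (name_a, xs), (name_b, ys) in combinations(predictors.items(), 2):
    r = correlation(xs, ys)
    flag = "collinear!" if abs(r) >= 0.70 else "ok"
    print(f"{name_a} vs {name_b}: r = {r:+.2f} ({flag})")
```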

Conclusion
So, by identifying statistical trends in scoring results, as well as the criteria that should be measured, evaluations and assessments can be used as a reliable tool for instructors to see what requirements need to be met in order to produce the best results in education. Students are also able to recognize those requirements by following defining tools such as rubrics, which guide them through their educational aims. That keeps them mindful of the standards required by the educational institution, and thereby keeps them focused on the skills the course is supposed to teach them.



Resources
Hinett, K. (1997). Review Symposium: Enhancing Learning through Self-assessment. Assessment in Education, 4(2), 321.

Browder, D., Spooner, F., Algozzine, R., Ahlgrim-Delzell, L., Flowers, C., & Karvonen, M. (2003). What We Know and Need to Know About Alternative Assessment. Exceptional Children, 70(1), 45.

Black, H. T., & Duhon, D. L. (2003). Evaluating and Improving Student Achievement in Business Programs: The Effective Use of Standardized Assessment Tests. Journal of Education for Business, 79(2), 90.

Friday, December 15, 2006

Evaluation: How You Know The Work Was Worthwhile

Your adrenaline is draining out of your system now; the class is over. How did you do? Most trainers can "sense" a general feel from the presentation and participation of the learners, but what were they really thinking? Did they get it? Can they do their job better now than they could before the training?

If you are a trainer, chances are you were hired for a specific job: making sure learners work better/faster/smarter. As with any other job, chances are your boss will want a full accounting of your performance in this area. How can you prove you have accomplished your goal in a way that's measurable and easy to understand? You do this through evaluation.

There are a number of ways you can evaluate the success of your training, depending on how much time you have to prove your worth to the company. There are the direct, timely methods, and there are indirect methods as well. Let's take a look at them both, and see which is best for you.

Direct Evaluation Methods
These are commonly called "Tests", "Assessments", and "Surveys". Basically, you check how well the learner performs at the beginning of the course, give quick tests in the middle to see if they understand each of the modules you are presenting, and then have a final exam that tests overall comprehension. This is probably the most traditional method of evaluation, and everyone is pretty much familiar with it. But it only looks at a small snapshot of the learner's abilities: you don't know if the targeted skills are going to be applied.

A real bonus from this method, particularly from the survey, is that you get a feel for your development and implementation of the course. How did it appeal to your learners? How are you doing as a presenter? There are a number of things you can learn through this method that will feed back into your ADDIE development, beyond just whether the analysis was correct.

Indirect Evaluation Methods
Indirect evaluation methods include monitoring employee performance over a long period of time, focusing on the overall numbers and how they relate to the skill that needed to be taught. Is there an improvement? Did it warrant the devotion of resources?

For those who are familiar with any type of research, this should be nothing new. Researching the results of a change is part of what analysts do, and it's what makes them so valuable to companies (mostly because it's so boring no one else wants to do it ^_^). But what do you analyze? Focus on the results as compared to your initial needs analysis. Did the numbers you focused on in your initial analysis change? Did they change for the better? Were there other factors involved that were not initially recognized?

For those trainers that are caught in the political arena within your company and were forced to create a training program to compensate for non-skill related issues, this is a perfect time to emphasize that while the skill became better known, the outcome did not improve because of the x and y factors. If you provide the information in a scientific way, showing that even though the training was a success the solution failed to be realized, the management will often concede, or let you go, which would also be an acceptable alternative. Who wants to be blamed for someone else's incompetence?

Seriously though, it's a good method to see how effective your training was, how accurate your analysis was, and how well each of the learners assimilated the information. You learn how well things are going and how you can improve your teaching style, and therefore increase your effectiveness as an instructor. A success here will validate your work, give you a great promotion, a raise, and a chance to win a free 2-week vacation in the Bahamas! ^_^

When to Use Your Evaluation Style
Neither evaluation method is perfect on its own, so combining both is essential for a full view of how well you are doing. Use a quick assessment at the beginning of the course to find out where your learners are (if that is in question). Once you know, have them keep their scores for future comparison and self-evaluation. Also have an after-class evaluation that is done anonymously, away from the classroom environment. This way the instructor doesn't have a presence to influence the outcome of the evaluation.

Then send two more evaluations, one after 3 weeks and one after 2 months. This way you can find out how well the content is remembered, and what the percentage of recall is for the learners. This is good long-term data to be gathering. And finally, spend some time doing indirect evaluations by checking performance numbers. Of course, this assumes you have access to the information. If you don't, you may want to provide a quick spreadsheet to the company that contracted your services so they can supply the final data to you. They can leave out anything proprietary and still provide enough information to let you know whether you have been successful.
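
As a minimal sketch of that schedule, here is how the follow-up dates and recall percentages might be computed; the dates and scores are invented:

```python
# Score the same assessment at course end, after 3 weeks, and after
# 2 months, then report recall as a percentage of the end-of-course score.
from datetime import date, timedelta

course_end = date(2006, 12, 15)
schedule = {
    "end of course": course_end,
    "3-week follow-up": course_end + timedelta(weeks=3),
    "2-month follow-up": course_end + timedelta(days=60),
}

scores = {"end of course": 88, "3-week follow-up": 80, "2-month follow-up": 74}

baseline = scores["end of course"]
for label, when in schedule.items():
    recall = 100.0 * scores[label] / baseline
    print(f"{when}  {label}: {scores[label]} points ({recall:.0f}% recall)")
```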

So, that finishes this segment of the ADDIE program. I may post some additional information on the subject, but for now, I wish all of you good luck in your training development!

Monday, December 11, 2006

Analysis Day 3: The Objectives

Now that the main body of the work has been accomplished, we need to identify the objectives. This outlines specific goals for the training session. What do we need to specifically accomplish? Well, let's find out.

The Learning Objective
The learning objective outlines the problem, the results, the environment and conditions for success, and the resources available for success. What's more, this is all in one sentence, so the use of commas is encouraged. It's through these objectives that your overall success can be measured, and therefore how you determine whether the training was worthwhile. This doesn't include evaluation methods, though they are closely related to this process. We will cover those in a later section.

So it is important that we identify what exactly constitutes success. This is defined by the Input (problem) and the Output (results). The Input presents the issues the training module is going to address, and is generally linked to the inability to perform the task at hand.

The Output outlines the ability to perform the task within the measurable guidelines required by the training. These can be satisfaction, performance, productivity, or safety guidelines; essentially, anything that measures success for the learner.

Following that, the Aids (resources) and Conditions need to be recognized. Aids identify the experience or needs that the learner requires to perform the task. For example, an aid would be a diagram showing how to insert the key into the ignition. In other words, it can be reference material, access to support staff, and anything else that can assist with the performance of the task.

The Conditions outline the limiting factors within the performance of the job. If an Internet connection is required and may not be 100% reliable, that needs to be taken into account. If access to the key locker is necessary, that needs to be taken into account. Basically, all factors not related to knowledge and skill are outlined here in order to set a reasonable expectation. If someone doesn't have the correct tools, you can't expect them to perform the task.

Once the sections have been outlined (I do this in the Task analysis document, directly above the inserted table), the objective can be created. So let's outline the sections!

The Input and Output
First we need to define what the problem is for this task going into training, and what we expect to get out of it. For instance, if we start the training with the idea that our taxi driver doesn't know how to start the car, we would expect that by the end of this module the driver can start the car while meeting all performance guidelines. That is an example of input and output. Here is how you can write it:

INPUT: The driver is unable to use the key to start the ignition.
OUTPUT: The driver is now able to start the ignition using the automobile's key to the extent that customer satisfaction and proper use guidelines have been met.

So what do we have here? We have the beginning and the end of the learning objective! That's right, we can actually copy and paste this into the learning objective, which saves a lot of typing. Finally! A short cut!

Aids and Conditions
I format my aids and conditions in a similar manner. While continuing with the example:

AIDS: Access to automobile manual, keys to the vehicle, and support staff.
CONDITIONS: Assuming the vehicle is in good maintenance, the driver is already licensed, and is familiar with the vehicle in question.

Here we have the center portion of the learning objective. Again, we can copy and paste this directly into the objective, which will save us a lot of heartache (and sore hand joints) in the long run. But how does it all go together?

Putting It All Together
It's time to look at the whole application of this work. Here is how it should look in your Task analysis document:

Task A: Starting The Vehicle
INPUT: The driver is unable to use the key to start the ignition.
OUTPUT: The driver is now able to start the ignition using the automobile's key to the extent that customer satisfaction and proper use guidelines have been met.
AIDS: Access to automobile manual, keys to the vehicle, and support staff.
CONDITIONS: Assuming the vehicle is in good maintenance, the driver is already licensed, and is familiar with the vehicle in question.

So we have our task, the problem stated, the expectations, the resources, and the conditions outlined. So let's put it together into the Learning Objective!

The learning objective would be formatted this way:

Given [input] and [conditions] with [aids], the learner will be able to [output].

For our example, it would look like this:
Given the driver is unable to use the key to start the ignition and assuming the vehicle is in good maintenance, the driver is already licensed, and is familiar with the vehicle in question with access to automobile manual, keys to the vehicle, and support staff, the learner will be able to start the ignition using the automobile's key to the extent that customer satisfaction and proper use guidelines have been met.

Yes, it's one long sentence, and it's probably not grammatically graceful, but it outlines each of the important steps in the training process. We finally have an objective specific enough to keep us on topic while developing and designing the material. But before we can get that done, we have one final step: the Assessment Methods.
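
Since the objective is assembled from four reusable parts, a tiny helper function makes the copy-and-paste step literal. The function name and argument names are my own; the template is the one above.

```python
# Assemble the learning objective from its four parts, following the
# "Given [input] and [conditions] with [aids]..." template above.

def learning_objective(input_, output, aids, conditions):
    return (f"Given {input_} and {conditions} with {aids}, "
            f"the learner will be able to {output}.")

objective = learning_objective(
    input_="the driver is unable to use the key to start the ignition",
    conditions=("assuming the vehicle is in good maintenance, the driver "
                "is already licensed, and is familiar with the vehicle"),
    aids=("access to the automobile manual, keys to the vehicle, "
          "and support staff"),
    output=("start the ignition using the automobile's key to the extent "
            "that customer satisfaction and proper use guidelines are met"),
)
print(objective)
```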

Assessment or Testing
Yes, you need to test your learners in some manner to be sure they are learning what you are trying to teach them. No matter how well you think you are doing, chances are you have lost someone that is too afraid to speak up. And if you have lost one person, you probably have a few others that are just barely keeping up. Assessments are necessary in determining their success, and whether or not you are teaching properly. If you lose a lot of students, it's time to rethink your approach.

In order to evaluate someone's abilities in the most efficient manner, the best thing is to create an environment as close to the actual performing environment as possible. No matter what other instructors (or even professors) may think, Multiple Choice doesn't do this. The real evaluation method is in practice.

That being said, if it's not possible, or economically feasible, to do so then alternative assessment methods can be used. After all, all certification classes (with a few exceptions) are multiple choice tests. This posting isn't meant to be a discussion on the virtues of each evaluation method, so you need to decide what is best for you.

When it comes to your Assessment methods, I would have at least three options selected: one for tactile learners (hands-on), one for auditory learners (written exams), and one for visual learners (presentations). This gives you a general pool to pull from while designing the course, and will give you a lot of flexibility in future implementations. For instance, when I created training curriculum for a company I worked for previously, I outlined assessment methods for both online and in-class training. I envisioned a number of alternatives the instructor could implement, and thereby created possibilities for future development.

Putting it All Together
The Learning Objectives and testing methods I place in the same document, separate from the Task analysis. This document is then used in conjunction with the task analysis to create the learning materials and design methods for each training module. As a quick tip, if you notice that a lot of your material is the same for each task, use your copy and paste option. It's an ideal solution for sore fingers.

Finally, we have finished Analysis! The next session in the ADDIE series will be Development. Fortunately, it's not nearly as long as the analysis section, and a lot more fun!

Tuesday, December 05, 2006

Analysis Day 1: Determining Your Need

I am, by nature, an analyst. I love to analyze everything from complex learning strategies to the movie I'm sitting through. Yes, I can safely say that analysis is a big part of my life (to the chagrin of my wife). And as such, you would think that instructional analysis would be right up my street. Well, you would be right, but only when I take it in short bursts.

Instructional analysis comprises a strong 75% of my overall design process, because of the need to get every detail worked out. The details are often so minute that they can be missed in initial surveys. So I developed my own system, which adapts many other systems I have been exposed to but works best for me. That being said, please don't think this is a one-size-fits-all scenario. The process itself may not work for your specific situation, but the basic elements should apply everywhere.

What is the Problem?
The first step in any analysis process is to determine the problem. A problem would basically mean a need is not being met. In the corporate world, this generally means that a job is not being performed to the standard that is expected.

This doesn't mean that a job isn't being done in the way that is expected, but that the outcome of the job produces results that are not as expected. I want to be very clear on that point, as innovation can be throttled if a single process is the only process allowed.

Is "throttled" too strong of a word? Good! I want to impress in this posting that the job of training is not to produce conformity, but to instill a level of competence that allows the learner to not only do what is required, but find ways to do it more efficently. This, utlimately, is what makes a good employee: Someone that is able to innovate within their realm. It also makes for really good resume fodder.

Also important to note: I have found that many managers feel training is the answer to everything. It's not. As I've mentioned before, you can't expect more knowledge to improve on poor management decisions. At best it insults the employees; at worst it exposes the poor management style for what it is, ruins morale, and shortens the employment span of the employees.

Where Does the Problem Exist?
So, having established what it is the trainer is looking for, it's necessary for the trainer to focus on causes of the problem within the context that it happens. Often this means going right to the source: sit with the employees that are expected to benefit from this training. Does everyone experience the same problem? What do they know? What don't they know? What are they allowed to do? What are they not allowed to do? These are all really good questions to get you started.

The next step is to check with those that do not experience the problem, which generally are more senior members within the group. What makes them different? This is the key that will answer the problem riddle, and determine whether or not training is necessary. Are the senior employees more empowered? Do they have access to resources that others do not? Do they have more knowledge than those that continually run into the problem? Do they have any insight into what could be the problem? These questions should clarify where a problem can exist, or at what point the process fails. If it doesn't, continue up the chain until someone gives an idea of the expectation and you have enough information to identify the problem.

Houston, We Have the Problem. Now What?
Once you have identified the problem, it's time to identify the solution. What is the only problem that training applies to? "There is a lack of knowledge or skill that needs to be addressed." That's it. Not having the tools to work with doesn't get resolved through training; it gets resolved with new tools. Poor management decisions don't get resolved through employee training; they require a better manager. Unclear expectations don't get clearer with training; they need to be clearly communicated by management to the employees.

While working for a previous internet company, I found a major problem. Employees didn't know what critical updates had been rolled to the site, and therefore couldn't support the users that had trouble with these new updates. What did management try to do? Give them more training. Did the employees need it? No! They knew how to resolve the issues, but they didn't know what changes were made, and hence could not prepare properly. This was a classic example of a communication failure within the company. Training cannot resolve this issue.

Also at the same company, I found a new project being rolled out to the site. This project was complex, and required a complete rethink of the entire process used on the site. Did this require more communication from the developers? No, because I already had all the information; it just needed to be distributed to the rest of the company to teach the employees the new skill. This is an excellent example of what training is all about.

So determining the need can be a long and comprehensive process, but it is a necessary step in order to determine whether training can actually resolve the issue. If not, you don't have to invest any more time into developing training, and can put that time and those resources into resolving the problem at another level.

Stay tuned for tomorrow's entry: Analysis Day 2: The Skill Assessment. Same Bat-time, same Bat-channel!