Friday, March 28, 2008

Task Analysis: Lost In Translation

The task analysis is the most important part of instructional design, but what if you have an overly complicated topic? For instance, suppose you have a language you would like to teach (German, French, Perl, C++), and you want to break it down into usable, manageable chunks for instruction. How would you do it? The idea is daunting; so daunting, in fact, that many language instructors simply fall back on the established curriculum designs of ages past. While those designs have a proven track record, are they really the way to go?

The analysis should instead hinge not on what could be taught, but on the end goal. This is captured in the Problem-Solution document: a clear declaration of the need you intend to meet and the degree to which that need will be met. With this in mind, the language goal isn't as daunting. Here is an example:

The Problem
Many people do not have a working knowledge of German, and therefore cannot function well within an all German-speaking environment.

The Solution
Provide the learner with basic grammar, vocabulary, and phrases to allow for productive interaction within a German-speaking environment, to the extent that the learner is able to understand basic requests and communicate same.

The above outlines a specific, measurable need: students need to understand basic requests when spoken to them, and communicate those same basic requests in a coherent fashion. Notice that a comprehensive discussion of the language is not required, nor a comprehensive history of it. Just the basics, measured by success in both understanding and communicating with someone in a German-speaking environment. Already we have avoided the most common pitfall of reference books that bill themselves as "learning tools": reference materials are not course materials, and vice versa. There is a time for detail, and a time for simple instruction.

So, with the goal in mind, we can start our learning process on the following premise: 1) our learners need to understand some basic grammar, 2) our learners need to know some basic vocabulary, and 3) our learners need to know how grammar and vocabulary work together to create a comprehensible phrase. The phrases can then be tested to see if our students' comprehension is where it should be. Now that we have our three "jobs" to accomplish within the course, we can move on to the various tasks required to understand the grammar, the vocabulary, and how phrases are used.

Oftentimes you may hear (or read, as the case may be) me complain about instructional books, primarily from the tech industry. This is because the tech industry (among others) has been inundated for years with experts who definitely know their stuff but don't necessarily know how to help others learn. The popular RTFM ("read the freaking manual") reply is far too prevalent today, and it underlines an unwillingness to put effort into education. It may be "funny" to the one posting it, but to those searching for help it becomes frustration. Either that, or it shows a lack of knowledge hidden behind a mask of superiority. Which do you think comes to mind when I read such a response while looking for a solution?

This is why those in charge of training and education need to step up to the challenge. So many people out there need to understand how to use the new technologies, information, and tools that are available in order to do their jobs better (and, consequently, help the rest of the economy), and it is up to educators dedicated to proper instructional design to convey this information. Whether it be in a book, online, or in a classroom, the techniques and task analyses are the same.

Wednesday, March 26, 2008

Trying to Understand Autism

Recently my 3 year old son was diagnosed with autism. We took him to a speech therapist after I finally convinced my wife and her family that his lack of speech at 3 was not normal and needed to be addressed. The speech therapist assessed his speech development at an 11-month level, and identified some behaviors that are typical of autistic children.

At this point, my mind seemed to turn off. I had been worried about my son's chances of having autism, because my older brother has symptoms similar to autism. This brought about a lot of fear, anxiety, and near hopelessness, because I didn't know much about autism other than stories of severe cases.

The therapist reminded me that he couldn't diagnose autism, but he gave us some options for speech development and referred us to the University of Utah for an actual diagnosis. My wife cried as we went home, and I remained numb. We started that day to get our son into the system, which can (and does) take weeks to reach the end goal of getting him help.

Since then we have met with a behavioral specialist, and I started doing some research online to understand autism. I checked out the National Autism Association website, which gave me a lot of good information.

Autism isn't classified as genetic, though it does tend to run in families. The reason it isn't called genetic is that researchers have yet to find an "autism gene" that would account for autistic characteristics. That, and the fact that autism diagnoses are becoming more common, suggest that autism is environmentally triggered. The problem is, the trigger hasn't been found.

Some believe that autism is caused by immunizations, due to a mercury-based preservative, though this has never been proven. There are also a lot of similarities between mercury poisoning and autism, but again, no conclusive link has been established.

While speaking with the behavioral specialist, she noted that my son has a very mild form of autism that is affecting only his speech and interaction. This is because he has already started to write his own name (on his own, I might add), and has mastered many skills that other children his age commonly have not. His comprehension and problem-solving skills are remarkable, which really impressed the specialist.

So, now I no longer feel as afraid or concerned for my autistic son. He is scheduled for pre-school, where he will have his own teacher who will work only with him. They also figure that he will be fully mainstreamed into the school system by the first or second grade. We are also going to work on his speech at home with a combination of pictures and American Sign Language, to help him better communicate his needs and wants.

And the most encouraging news so far: my son has started talking! He's repeating words and communicating more regularly. While most parents of 3 year olds are complaining about their kids' incessant questions and chatter, my wife and I rejoice in every slurred word my son says more than just once. Our dream is to have our son rise to his full potential and overcome his disability.

I know that this isn't a normal post for my blog, but I want to reach out to any readers who have autistic children, or who are concerned about their child's development. Autism covers a range of severity, and many autistic children are exceptionally intelligent; they just have trouble communicating or interacting. If you have any questions or concerns regarding your child's development, get them tested as early as possible. The earlier they are tested and diagnosed, the better the chances of halting and even reversing the symptoms.

Friday, March 21, 2008

Cognitive Load: When Your Brain Is Full

We have all had that one experience, I'm sure. Sitting in a meeting or classroom, listening to a presentation or watching a demo, the brain starts to wander. It can happen to different people at different times, with different initial triggers, but the reason is the same: the brain's cognitive load limit has been reached.

The Effects of High Cognitive Load
So what is Cognitive Load? Basically, it's the amount of working memory the brain uses to perform tasks. The more tasks you perform, the more cognitive load you heap on your brain.

Of course, different levels of brain activity utilize your working memory differently, based on the need for understanding. The less you need to assimilate within a given time, the less cognitive load you incur. It seems like a simple concept, doesn't it? Focus on what you are doing, and eventually you will get there.

Unfortunately, in the world of Professional Training, this is a luxury you can't often afford. People are required to assimilate a lot of information in a very short amount of time. Because their cognitive load is really high, they are less likely to understand the topics that are being discussed. Consequently the learning experience is diminished or negative, leaving the learner confused and even a little scared.

An excellent example was a student who had taken my Excel 2003 Level 1 course. Normally this is a very simple class, and I like the design because it takes Cognitive Load into consideration for the majority of students. This student, however, was new to the Windows platform entirely. So not only was the student trying to understand the Excel interface, but also the Windows interface. The student locked up, and it took my entire lunch break to help them start to feel more comfortable.

Fighting High Cognitive Load
So, what is a course designer to do? You are required to cover the course material in as little time as possible, which imposes a high cognitive load. How can you be sure that your students are maximizing their understanding?

The first thing I would recommend is checking for those peak times when Cognitive Load becomes an issue. The number one culprit: lunch. After lunch, the learner's blood rushes to the stomach to digest food, leaving the brain with less to work with. I call this the Lunchtime Lull, and it is best fought with a less-taxing assignment or a fun game that reinforces your principles at a high level, letting the students' minds rest a bit.

Another problem is all lecture and no practice. While some learners are excellent auditory learners, most need to apply the lecture at least once to cement the concept. Working through exercises fires additional synapses, deepening the imprint of the concept in the brain. Exercises also provide a natural break between lectures and concepts: once one principle has been assimilated and applied, the brain feels comfortable enough to move on to the next subject.

The last Cognitive Load problem I will point out is overwhelming media. Today we have outlets that give access to just about every form of media, and it's not uncommon to see people trying to juggle more than one at a time. How many students do you know who do their homework while the TV is on? Or with the radio playing non-instrumental music?

The brain isn't able to multi-task very well; it will latch onto only one function at a time at a high level (actively processing information). All other functions are relegated to low-level processing, running at basically the same level as riding a bike or keeping your balance. As a result, only one media type can be used to "learn". Any other media is either a distraction, or is being processed at a low level and never assimilated into memory.

"But." I hear you say, "I have read a book and watched TV/listened to my favorite lyrics at the same time, and I get by just fine!" Ask yourself how many times you were thinking of the story within the show on TV or listening to the lyrics when you should have been reading. Happen very often? Probably. Another basic concept of the brain is that, just like most forces in nature, it will take the road of least resistance unless forced otherwise. That means that your "distractions" are just your brain telling you it's easier to process the distraction than the material in front of you.

How do you fight this? The first and perhaps most difficult step for the learner is to remove the distraction. Have learners turn off their cell phones while in class, avoid texting when they should be working on an assignment, and stay off their MySpace/Facebook pages during a lecture (a pet peeve of mine!). If they focus on their work and what they are learning with no distractions (including audio), their cognitive load decreases.

Something else you can do is increase your concentration. This is more difficult than it sounds, because concentration itself consumes cognitive load: the learner spends part of their working memory consciously blocking out distractions, leaving less for the material. The fewer actual distractions in the room, the less effort that blocking takes, and the more likely it is a learner will be able to learn.

How Does This Affect Me?
We all assimilate information every day, whether through conscious learning or basic observation. That means we all need to watch our Cognitive Load. Ever wonder why people vegetate in front of a TV? Shows and commercials are engineered to consume most if not all of your cognitive load to keep your attention. That way you are less likely to change the channel, and more likely to boost their ratings and ad revenues.

So look at your own life and see where your Cognitive Load is peaking. Is there a way to reduce it, so that you can maximize your learning and observation? You may be surprised how much more productive you become when you don't try to multi-task quite so much.

Thursday, March 20, 2008

The Task Analysis Revisited

It's been a long time since I have posted anything on instructional development. But yesterday the power was out in my office for the entire day, giving me a chance to work from home. There I started to focus on the course development process again.

Learning To Cook: A Quick Task Analysis
Let's take a deceptively simple topic like learning to cook. There are a lot of self-help books, recipes, and even websites out there that give you some general information and then basically throw you into the deep end. Can it really be that simple to learn how to cook? I thought I would run a quick task analysis on the process to see what I could come up with.

I began by breaking the process of cooking down into several jobs. You have the basic understanding of the tools involved, the need for a clean environment and fresh food, and the techniques used to prepare the food. I then looked at the various methods of cooking, depending on the desired outcome. Finally, I ordered the jobs roughly so that each one must be understood before the next can be completed. This follows the Constructivist method of instruction, allowing for layered modules that build upon each other.

I ended up with the following breakdown:

1. Equipment
2. Measurements
3. Cleanliness
4. Preparing Food
5. Seasoning Food
6. Heating Food

As actual consumption of the food is not technically part of the cooking process, I left it out of the list. The idea being that once you are done, you can move on to the next job: eating food. ^_^

From these basic jobs expected of a cook, I can then break each down into individual tasks. For example, Equipment could be broken down into these sections:
1.1 Cleaning Tools
1.2 Hand Tools
1.3 Preparation Tools
1.4 Cooking Surfaces
1.5 Measurement Tools

These tasks can be broken down even further into sub-tasks, which can be broken down still further into your skills and knowledge. From the eventual breakdown, you get a complete outline of your course, all from the analysis. At this point your instructor can fill in the actual lecture material themselves, either on the fly while teaching or (if in an online environment) with carefully structured lectures.
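
As a side benefit, a breakdown like this maps naturally onto a folder structure for your course materials. Here is a minimal Terminal sketch (the folder names simply mirror the jobs and Equipment tasks above, and are purely illustrative):

# Sketch: capture the task analysis as a skeleton of course-material folders.
mkdir -p cooking-course/{2-measurements,3-cleanliness,4-preparing-food,5-seasoning-food,6-heating-food}
mkdir -p cooking-course/1-equipment/{1.1-cleaning-tools,1.2-hand-tools,1.3-preparation-tools,1.4-cooking-surfaces,1.5-measurement-tools}

Each folder can then hold the lectures, exercises, and assessments for that module, so the course structure and the analysis stay in sync.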

I have been known to say that a subject matter expert (SME) isn't necessarily a good teacher. That's because (among other things) an SME often has no concept of task analysis. The second concept a lot of SMEs have trouble with is Cognitive Load, which will be a topic for another post.

If you are looking to teach anything, it's a good idea to spend a lot of your time on analysis. A task analysis can take a long time to complete, but in the end it makes the rest of the process a lot easier. You can also revisit specific portions of your analysis after you have evaluated the course's success.

Monday, March 17, 2008

Kerberos Issues with Open Directory 10.5? Here is a Sure-Fire Fix

I thought I would post this, as we ran into this very issue in our class. At times, when you try to start Kerberos on Mac OS X 10.5 Server, the realm gives you trouble. The first thing you should do is check the host name with changeip and track down the issue with your DNS. Then you can fix your Kerberos issues with the following steps, as found here on Apple's documentation pages:

1. Fix Your DNS: This is necessary; otherwise the steps below will not work. Make sure forward and reverse lookups of your server's name both resolve correctly.

2. Fix your /etc/hosts file: Best done in Terminal. Run sudo bash and authenticate to get to root, then run vi /etc/hosts. Once in there, add your server's IP address and fully qualified domain name, like this: 10.1.0.1 mainserver.pretendco.com

3. Set your Host Name: This can be done as root with the following command: scutil --set HostName mainserver.pretendco.com. Replace the mainserver.pretendco.com entry with your own host name in this step and in all subsequent steps.

4. Initialize Kerberos: This requires three steps (and being logged in as root):
slapconfig -kerberize diradmin MAINSERVER.PRETENDCO.COM (diradmin would be the directory admin login name)
sso_util configure -r MAINSERVER.PRETENDCO.COM -f /LDAPv3/127.0.0.1 -a diradmin -p diradmin_password -v 1 all (replace diradmin and diradmin_password with your directory admin and password)
sso_util configure -r MAINSERVER.PRETENDCO.COM -f /LDAPv3/127.0.0.1 -a diradmin -p diradmin_password -v 1 ldap
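
For reference, here is the whole sequence as a single Terminal session (a sketch only: mainserver.pretendco.com, diradmin, and diradmin_password are the placeholder values from the steps above, so substitute your own; the sudo prefixes are redundant if you are already root, and the /etc/hosts edit from step 2 still has to be done by hand):

sudo changeip -checkhostname   # verify DNS and the host name agree (step 1)
sudo scutil --set HostName mainserver.pretendco.com   # step 3
sudo slapconfig -kerberize diradmin MAINSERVER.PRETENDCO.COM
sudo sso_util configure -r MAINSERVER.PRETENDCO.COM -f /LDAPv3/127.0.0.1 -a diradmin -p diradmin_password -v 1 all
sudo sso_util configure -r MAINSERVER.PRETENDCO.COM -f /LDAPv3/127.0.0.1 -a diradmin -p diradmin_password -v 1 ldap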

Once you finish these steps, reboot the machine, and check your Server Admin utility. You should see that you have all your services running on your Open Directory Master.

Even with this trouble, Kerberos seems really simple to set up on a Mac server. I've never tried it on a Linux server, but judging from the expressions on some friends' faces when I suggest it, it doesn't seem to be very simple there. I'm not sure how it's implemented in Active Directory either, though I do know it's just as frustrating when it doesn't work.

Friday, March 14, 2008

Happy Pi Day!

I love March 14th. It's the perfect day to eat pie. Why? Because the date is 3.14, best celebrated at 1:59. And how do you celebrate Pi Day? Eat your favorite pie!

I'd like to challenge you all to find an unusual pie for today to try. Not up for sweets? Find a savory pie or pie recipe and share it! Meat pies have a long and outstanding history, and should be celebrated as well.

Enjoy!

Thursday, March 13, 2008

Gender Roles In Pre-History as Reflected in Food

Recently, I started to read History of Food by Maguelonne Toussaint-Samat. It's an anthropological look at the development of food from pre-history to the modern day, and at how each aspect of our food production arose and was then adapted into our society. For those of you who are interested in culinary anthropology, this book is definitely for you. I will be writing a full review once I finish the book.

The section I just finished covered hunting and gathering, and it focused on the gender roles of each. It seems that men had what modern society might consider the "exciting" jobs, while the women led a comparatively safe and sheltered life. Was this because of some gender discrimination? Were women being oppressed?

The explanation was surprising, but not really unexpected if you are familiar with anthropology in general. If you aren't, let me break down how I understood it:

Women in Pre-History
Women in pre-history were not treated as objects or weaklings, but rather were practically worshipped. This is because of their creative power: the ability to bear children. As such, they were highly prized and protected from the dangerous elements of life in general. Men would guard them zealously, walking before them at all times. Why? Because if an animal were to attack, it would attack the first person it could identify.

But women were not placed on the proverbial pedestal either. They worked, and worked hard. Women gathered the safer forms of food (i.e., vegetation and grains), providing the majority of the family's caloric and nutritional intake. In the natural progression of things, women also began to sow these seeds close to home, and so began gardens. Gardens became farms, which then led to a need for organization and societies. So, in a way, women were responsible for civilizing the world.

Men in Pre-History
So what did the men do? Well, because they didn't birth and suckle the children while they were young, they needed to be the protectors. They took on hunting as a source of high-fat foodstuffs, and gathered honey (with the exception of South America, where women would gather honey from stingless bees). Because of the danger of both these endeavors, men were the natural choice: their survival didn't affect the survival of their children at nearly the level the mother's survival did.

Because hunting required organization only at a surface level (no real personal connections are needed), men developed a surface, superficial nature. They made and broke relationships freely, made alliances when it suited them, and dismissed them just as freely. This placed them in the perfect position for politics once women had created society.

An interesting side note is that religion is often credited with forcing men to bring more depth to their relationships and values, something they would naturally avoid. Concepts like honesty, honor, and dedication are value systems that women developed naturally through their need for social child-rearing, and that men lacked because such commitments weren't flexible enough to work well on the hunt.

Modern Days
The modern day has complicated the basic "pre-history" picture I have painted, because the roles of men and women have changed substantially since then. Societies have experimented with various systems of rights for both men and women, moral and religious systems, and so on, and it seems we are determined to keep the experiment going. But I find it fascinating that women were behind the agrarian revolution that allowed for societies and eventually civilization (in the Greek sense: city dwellers).

It's always intriguing to see the history of anything, because you get a richer understanding of a society's current development. One thing my Anthropology professor would always say is that you can't ignore your past. Once something has developed, it can be changed or altered, but never removed completely.

Tuesday, March 11, 2008

iPod and iPhone Stands, and Why Make an iPod Slate

Normally I don't post links to products that I currently have no use for, but these were so cool on at least two levels that I couldn't resist.

While reading my news articles for Apple rumors and potential products, I saw an ad for these iPod and iPhone stands. Normally I wouldn't care, but the design was pretty neat.

But there is another reason: part of my hoped-for UMPC from Apple (the iPod Slate) would be a functional stand, so I could use the machine as a regular computer. Tether a Bluetooth keyboard and mouse to it, and I'm set. The problem is, I can't see a stand being built into the case design without making it out of plastic or thin, lightweight metal, which means it would almost certainly break.

But I see that a third party has already been creating stands for the iPhone and iPod Touch (as well as other iPods), so Apple wouldn't need to design a stand into the case. This would simplify the design (right up Apple's street), and give a third party a great boost.

Does this mean that Apple is going to release the iPod Slate? No, not really. In fact, there is no evidence (outside of wild rumors and a bit of supposed insider speculation) that such a device will be made available in the near future. But it's still a great idea, and I hope Apple does think of it.

Why should they even bother, though? What would make an iPod Slate more marketable than the iPod Touch, or even a MacBook Air? Well, let me list the reasons (assuming that my criteria and setup are met):

Presentations
Laptops and Notebooks have one huge disadvantage in the presentation arena: Their size. They are bulky, even the sub-notebooks. Why? Because of the screen. The clamshell design is great for working on projects, coding, and gaming, but gets in the way when trying to present to a group of people. It stands out, and because it stands out it can distract from the overall presentation.

Ultra Mobile PCs have the benefit of being tablets, so they lie flat. They also have the benefit of being lightweight and small, so they can be placed just about anywhere without being too visible. This leads to less distraction (after the initial "Wow! That's soo cool!"), and you can get down to the actual presentation.

Meetings
There was a time when I thought I could take my laptop into a meeting to take notes. I actually meant to on several occasions, but I didn't. Why? Because it was just too big and bulky. And I have a 12" PowerBook! Instead, I took my PDA (a Toshiba Pocket PC) and tried to take notes on that. I didn't have a keyboard, so I was left to use the handwriting recognition or the on-screen keyboard: neither worked very well. So that didn't work for me either.

But enter the Ultra Mobile PC: tether a Bluetooth laser virtual keyboard to it, and just start typing. The UMPC can even remain in your portfolio (if you are good enough at typing) and never has to be visible in the meeting. It takes up less space on the table, and no one is distracted by clacking keys (just thumping fingers).

The key to such an input device is size and portability. The PDA has been under scrutiny for years because of its size restrictions and processor limits. Features like voice recognition and speech-to-text that would make the PDA an almost perfect tool haven't been available, because processing power was sacrificed in the interest of battery life.

But that's all changing, with more efficient batteries on the way and more powerful yet energy-efficient processors being developed. The PDA could become a full-blown PC without needing a huge power supply or fans.

General Computing
For the majority of users, the typical computing experience has been email, the web, and Solitaire. You may laugh at this, but while working for Packard Bell NEC, I learned of hundreds of people who bought a thousand dollars' worth of computer just so they could play Solitaire. Any PDA can fill this need, and do it well.

But computer usage has become more specialized, and will continue to specialize. The need for a mobile computer that can do a given task and do it well is growing, and the tasks are varied. Some people are convinced that multiple devices, each doing its job well and interacting with the others, is the way to go (i.e., the old UNIX model of programming). Unfortunately, devices are not yet small enough for that to be practical. So we need some devices that can multitask.

Enter the UMPC. While it can technically take the place of an iPod for entertainment, it needs to do more. It needs to run software, provide presentation media, and allow specialized software to be installed. It also needs to be reliable and very user friendly.

Why, you ask? Have you ever been in a doctor's office and watched the doctor try to use a computer to enter your medical information? Many doctors are moving to a more mobile environment where they can enter information into a central database, and they need mobile PCs that can do it. PDAs just don't cut it: entering information on them is too difficult, or they are too underpowered to run a full program and must make do with a watered-down version with no features.

But Why Hasn't It Worked In The Past?
You may be saying at this point, "Yes, I get your points. But UMPCs have been around for years as a concept and have never taken off. Why should Apple even think about it?" And you would be right: the platform hasn't taken off quite yet. Some of that has to do with the mentality of the makers of the UMPC, but most of it has to do with the chosen software.

Mentality Killer
Other UMPC developers have had the same vision I had: a full-fledged PC that works like a PDA when needed. But, unfortunately, the whole PDA idea just stuck. The first problem is the idea that more is more: chock the case full of slots for peripherals, and the gadgets will sell. Unfortunately, that doesn't work, and it drives up cost. So now you have a really expensive device that works like a PDA and sacrifices size for the adapters.

A lot of PC commentators may argue with me on this, but I still maintain that most people don't use most of the peripheral slots available on their devices now. What do they use? Mostly USB, wireless (Bluetooth and WiFi), a PC Card slot for cellular wireless (if not built in), a VGA/DVI adapter for presentations, and perhaps a FireWire cable if they are syncing a heavy media device. So what is missing?

1. SD Cards (and other media): While I love SD Cards because of their size, you don't really need a built-in card slot. You can get a USB reader for next to nothing on eBay (I did) and use your USB port. Need more than one USB port? Use a hub. That's really why USB remains the most popular peripheral platform to this day.

Now, I can see great potential in using SD Card slots as hard drives. The Eee PC is able to boot off an SD Card, which presents a great option should the machine's SSD die. So perhaps there should be an SD Card slot; but what about Compact Flash? Or MMC cards? Or all the other storage media out there? Should they also be supported? Wouldn't it just be easier to get a USB adapter and use the storage when you need it? I'll leave that for others to debate.

2. Gaming Ports: Joysticks on a UMPC? Well, if you want to make it a truly powerful draw to the 20-somethings, it's almost a given. But do you really need a gaming port? No, not really. Leave the heavy gaming platforms to the Desktops and Laptops.

3. Optical Drives: Many people are screaming at Apple for not providing an optical drive in many of their devices. The MacBook Air has an external drive if you need it, and the Apple TV just doesn't have the option. But when you think about it, how often (outside of required disks for gaming) do you really use the optical drive? Perhaps you burn backups every once in a while, or burn a CD from your music collection... when else do you use it?

I know that I don't use an optical drive much, because it just doesn't hold enough media. Instead I use network drives, or a USB/FireWire drive, to store and transfer media. The optical drive on my main machine is used for ripping CDs and DVDs and burning a hard copy for backup. That's it, nothing else; 90% of the time I don't use an optical drive at all. So why should I have that space taken up? It just doesn't make sense.

Software
Okay, for those of you who are married to your platform and loyal to the end, you can write this off as my Apple fanboy rant. But the real reason UMPCs in general have failed, in my mind, is the software platform (i.e., Windows). Microsoft has a platform that has been riddled with bugs, security holes, and instability. Ultimately I don't blame the code so much as the core platform: it's just not UNIX. But because of the whole Windows package, you have an unreliable device.

Now, let's say you replace the non-UNIX platform with a UNIX platform: which should it be? Linux is still constantly in development, playing catch-up with hardware developers. Why? Not because there is a problem with the platform, but because Linux developers are making Linux all things to all platforms. It works, but it delays deployment on newer hardware.

Now, let's look at the Macintosh platform. Apple is able to build its machines so well because it supports only a set range of hardware specifications. Why? Because anything more would overwhelm the Mach kernel, and place development into the same muddy waters that GNU's HURD has been trying to navigate. Instead, Apple streamlines the code by streamlining its hardware offerings. That means a streamlined user experience with few to no functionality problems.

Anyway, that is my little rant. I hope someone from Cupertino is listening, because I really think that an Apple UMPC would be a wonderful thing. Add to that the ability to install software, and you have an even better platform. Don't overload it with too many peripherals, and you have a sleek, functional device that will get the job done. You want a gaming platform? Get a gaming platform.

Tuesday, March 04, 2008

VMware and Gaming: A Review

This last week I have been looking seriously into VMware as an engine, and at potential options for Windows gaming on the Mac. VMware has a wonderful (if experimental) option to pass hardware-accelerated 3D through to the guest's video card, if only for XP at the moment. So, I thought I would give it a go.
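
(For anyone who wants to flip the same experimental switch by hand, it can be set in the virtual machine's .vmx file. A minimal sketch, assuming a Fusion VM named "Windows XP" in the default Virtual Machines folder; both the name and the path are illustrative, so adjust them for your setup, and shut the VM down first:)

# Sketch: enable VMware's experimental 3D acceleration in the .vmx by hand.
echo 'mks.enable3d = "TRUE"' >> ~/Documents/Virtual\ Machines/Windows\ XP.vmwarevm/Windows\ XP.vmx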

The Test
One game that I have been aching to try again is Ultima IX: Ascension. Not because it was the best Ultima game out there (I still contend that Ultima IV was the best), but because the engine was so radical for its time. Of course, that also means it had very demanding system requirements for its time as well. How well could it run within a virtual machine? That's what I wanted to know.

First Attempt: QEMU
I first attempted the install with QEMU, and it failed. Why? Because QEMU doesn't emulate 3D graphics hardware (nor should it, given what it's designed to do). So the only method available for my PowerPC outside of Virtual PC wasn't going to work. I needed something more advanced.

Second Attempt: VMWare Fusion
I next attempted VMware Fusion, and the install went clean and easy. It was quick, detected the processor without any trouble, and in XP it picked up the hardware 3D graphics option without a problem (not so when I tried installing under Windows 98SE). So I tried to run the game.

Performance
The display was just as I remembered, and the video clips played without a hitch. It looked like I had found a viable option for my Ultima IX itch... until I actually tried to play. The mouse was not controllable, making movement around the world very difficult. This, of course, leads to a whole different rant about requiring mouse-driven movement, but that can wait for another time.

The Verdict
Until I figure out why the mouse isn't working, I would have to say that this almost gets there, and then crushes you right when you think you have the option sewn up. It's more disappointing than having the game not install at all, because it looks like you might succeed right before you fail miserably.

So, what am I going to do? Well, the next step is to find out why the mouse was not working properly. I may also try creating a Boot Camp partition again, booting into Windows directly, and seeing if it works. Ultimately, that's not the solution I want, because it just reinforces the need for a decent Ultima game ported to the Mac. Perhaps it's time I learned 3D game rendering and started working on one myself.
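
(One hedged guess I plan to test: older games often expect relative mouse input, while VMware presents an absolute-pointing device to the guest. If that is the culprit, disabling the absolute pointer in the .vmx might help; this is an assumption on my part, not a verified fix, and the path is the same illustrative one from above.)

# Hypothetical fix, untested: force relative mouse mode in the guest.
echo 'vmmouse.present = "FALSE"' >> ~/Documents/Virtual\ Machines/Windows\ XP.vmwarevm/Windows\ XP.vmx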