Not very different, unfortunately. My first thought when I saw the first
screen on my Android phone describing how to use it was "Oh yeah, Graffiti."
Instead of hitting individual keys you draw the pattern for the symbol you want
and the handheld translates that pattern into the symbol. The box is a bit
bigger than the input box on my Visor, the symbols are whole words and I don't
need that stylus, but underneath I'll bet the actual code's remarkably similar
to what the PalmOS software used to translate Graffiti strokes into letters. And
I'll bet if you go into OCR software, some remarkably similar code shows up
there for recognizing both individual letters and entire words based on the
lines and curves in the scanned image. So the real question is, is the idea of
taking a technique long used on individual characters and expanding it to whole
words so novel, so non-obvious that it wouldn't be thought of by someone handed
the problem of making it easier to input text on a touch-screen? And to answer
that, I'd point you to the PenPoint OS, which goes back to 1992 or so. It did
handwriting recognition, and that's exactly this: taking the path followed by
the pointer and matching up the shape to translate it into symbols. I'd say
something that was being done 30 years ago is the antithesis of novel.