This article is a 🌱 Seedling, which means it’s still a work in progress and published in its raw form. More on Seedlings
Ever thought about swapping your keyboard for your voice? Dive into this piece where we chew the fat on the hustle of an old-school print shop and ponder over a future where talking to your computer isn’t just science fiction.
In the initial days of my involvement in the family business, time was money, and for a printing company that meant a lot of things: an endless queue of customers impatiently waiting to get their documents typed, missing documents and folders, and computer viruses.
Survival in such a cut-throat environment hinged on rapidly improving our operators’ typing speed and accuracy. In essence, our business viability was tethered to the proficiency of our computer operators. This turned the business into a battleground against time and financial constraints. And, imagine this: we powered our operations with diesel engines to maintain a consistent supply of electricity.
To get better at near-instantly turning handwritten notes into Word documents, I needed to learn touch typing, and the predominant software for that was Mavis Beacon.
Fast forward to today, and I can’t help but envision what the future of human-computer interaction will look like, given that programs like Mavis Beacon improved how we interact with keyboards. I think we are about to enter another era.
In less than half a decade, I firmly believe that computer interactions will be redefined. We’ll likely be dealing with task-specific, adaptive interfaces projected onto the immersive screen of a VR headset, rather than staring at a static computer monitor.
Though monitors will remain in existence, I envision a shift towards narrating our thoughts and actions in natural language, or a derivative of it: “pseudo-natural language”.
Why do I refer to it as a pseudo-natural language? It’s because I foresee a standardized list of semantics or UX standards becoming commonplace, especially when interacting with applications such as document editors. For instance, you could instruct the system to “rephrase the second paragraph into a more formal tone” or to “delete the section after ‘however’ in the fourth paragraph.”
Moreover, we might command graphics software like Adobe Photoshop, Inkscape, Sketch or Figma to “remove the background on this image and add a solid color to the root layer.” The tool’s semantic keywords will persist, but our thoughts will translate directly into actions on our devices at a pace that feels almost instant.
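To make the idea concrete, here is a toy sketch of what a pseudo-natural-language layer might look like: a small, rule-based grammar that maps a constrained phrasing onto structured commands an editor could execute. Everything here is illustrative; the pattern list, the `EditCommand` shape, and the phrasings are assumptions, not any real product’s API.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class EditCommand:
    """A structured intent an editor could act on."""
    action: str
    target: str
    argument: Optional[str] = None


# Hypothetical grammar: each pattern maps one constrained phrasing
# ("pseudo-natural language") to a named action.
PATTERNS = [
    (re.compile(r"rephrase the (?P<target>.+?) into a more (?P<arg>\w+) tone"),
     "rephrase"),
    (re.compile(r"delete the (?P<target>.+)"), "delete"),
    (re.compile(r"remove the background on (?P<target>.+)"), "remove_background"),
]


def parse(utterance: str) -> Optional[EditCommand]:
    """Return a structured command, or None if the phrasing isn't recognized."""
    text = utterance.lower().strip().rstrip(".")
    for pattern, action in PATTERNS:
        match = pattern.fullmatch(text)
        if match:
            groups = match.groupdict()
            return EditCommand(action, groups["target"], groups.get("arg"))
    return None
```

For example, `parse("Rephrase the second paragraph into a more formal tone.")` yields `EditCommand(action="rephrase", target="second paragraph", argument="formal")`, while free-form input outside the grammar returns `None`. The point is the standardization: a shared, predictable vocabulary is what would make spoken commands reliable across applications.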
This evolution raises an intriguing question: do we still need Mavis Beacon and similar tools that taught us to interact with our keyboards, mice, monitors and other computer peripherals? Is there still a need to grasp the foundational basics of machine interfaces, or is a transformation looming that will make such knowledge redundant?