Interaction Design, Web Design, Natural Language Processing
< 1 month
It's quite cool to have a virtual personal assistant on your phone to open apps, but things get even better when one can literally do your job. While working as a freelance web developer, I often wondered whether I could work remotely and simply describe a design to my device instead of worrying about the coding process. I spent some time contemplating how I could make this a reality and got to work.
First, I knew it would be hard to make the algorithm write both HTML and CSS, so I decided to build a stylesheet library inspired by Bootstrap. Enter Manuscript.css.
Manuscript.css is a pre-written stylesheet that saved me the time and effort of styling each component. It helps anyone add simple, professional UI components to their websites. I spent about a week making it as customizable as possible and ensured there were several design options and themes to work with.
This library allowed me to style components through HTML itself by compounding classes that contained pre-written styles.
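As a rough sketch of what that compounding looks like (the class names below are illustrative, not Manuscript.css's actual ones):

```html
<!-- Hypothetical example: each class carries one pre-written style,
     and compounding them composes the component's final look. -->
<button class="btn btn-rounded theme-dark">Sign up</button>
```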
I won't go through the entire process of coding a bot that understands natural language well enough to write HTML, because it would be long and boring. To sum up, here is a high-level explanation of the steps I took:
I wrote functions that manipulated HTML code by treating it as plain text. These functions required certain parameters (arguments), such as the HTML tag to target, the action to perform, the classes involved, the text to be added, the positioning of the element, and more.
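A minimal sketch of what one such function might look like, treating the HTML purely as a string rather than a parsed DOM (the function name and parameters here are illustrative, not the project's actual API):

```python
def insert_element(html, target_tag, new_tag, classes=None, text="", position="inside"):
    """Insert a new element relative to the first occurrence of target_tag,
    manipulating the HTML as plain text. Illustrative sketch only."""
    class_attr = f' class="{" ".join(classes)}"' if classes else ""
    snippet = f"<{new_tag}{class_attr}>{text}</{new_tag}>"
    closing = f"</{target_tag}>"
    if position == "inside":
        # Place the snippet just before the target's closing tag.
        return html.replace(closing, snippet + closing, 1)
    elif position == "after":
        # Place the snippet right after the target's closing tag.
        idx = html.find(closing) + len(closing)
        return html[:idx] + snippet + html[idx:]
    raise ValueError(f"unknown position: {position}")

page = "<body><div>Hello</div></body>"
page = insert_element(page, "div", "p", classes=["lead"], text="Welcome", position="after")
# page is now '<body><div>Hello</div><p class="lead">Welcome</p></body>'
```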
The next task was to extract the required information from a natural-language instruction and supply it to the appropriate function that could perform the intended action. I implemented this using a standard word-to-vector conversion and a decision tree algorithm to identify the outputs.
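To make the idea concrete, here is a toy version of that pipeline: a bag-of-words vectorizer over a tiny vocabulary, fed into a hand-built decision tree. (The real project presumably trained its tree on labelled examples; the vocabulary, tree, and action labels below are all made up for illustration.)

```python
# Toy vocabulary; each position in the vector marks one word's presence.
VOCAB = ["add", "remove", "button", "heading", "style"]

def vectorize(instruction):
    """Convert an instruction into a 0/1 bag-of-words feature vector."""
    words = instruction.lower().split()
    return [1 if w in words else 0 for w in VOCAB]

# A decision tree as nested tuples: internal nodes test one feature
# index, leaves hold the action label.
TREE = ("feature", 0,                  # does the instruction contain "add"?
        ("feature", 2,                 # yes: does it mention "button"?
         ("leaf", "insert_button"),
         ("leaf", "insert_generic")),
        ("leaf", "delete_element"))    # no "add": treat as a removal

def classify(vec, node=TREE):
    """Walk the tree, branching on the tested feature at each node."""
    if node[0] == "leaf":
        return node[1]
    _, idx, if_true, if_false = node
    return classify(vec, if_true if vec[idx] else if_false)

print(classify(vectorize("add a blue button to the header")))  # insert_button
```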
Additionally, not every instruction can be expected to contain all the information, so some form of memory had to be added so the program could remember context across past instructions. I did this by maintaining a stack of the operations performed and using previous instructions to determine new actions. The same information is also used to reply with relevant questions that obtain any missing pieces of information.
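The context mechanism described above can be sketched like this: a stack of completed operations fills in arguments the current instruction omits, and the bot falls back to a follow-up question when nothing can be recovered. (All names are hypothetical, not the project's real ones.)

```python
history = []  # stack of completed operations, most recent last

def resolve(action, tag=None, text=None):
    """Return a complete operation, borrowing the target tag from the
    most recent operation when the new instruction omits it."""
    if tag is None:
        # Walk the stack top-down looking for a previous target.
        for prev in reversed(history):
            if prev.get("tag"):
                tag = prev["tag"]
                break
    if tag is None:
        # No context available: ask the user for the missing piece.
        return {"question": "Which element should I apply that to?"}
    op = {"action": action, "tag": tag, "text": text}
    history.append(op)
    return op

resolve("insert", tag="button", text="Submit")
followup = resolve("style")  # no tag given: reuses "button" from context
```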
If you wish to know more about the process, write to me or talk to me about it and I would be happy to share the details. The more interesting part, I think, is how the interaction with the bot takes place. To show my work designing this interaction, here's a quick demo.
This interaction also translates to a voice-based interface built using the Web Speech API.
Even though there's much more that could be done with this project, I'm quite happy with this initial version. I was able to build a completely functional website with it, which is a great measure of success for me. In the future, I will keep iterating on the design and the algorithm to improve the project's interactivity, and hopefully make it public so that even people who are new to web development can use it to implement their designs.