Showing posts from December, 2017

Week 16: Catching up on Winter Break Tasks

As mentioned in the previous post, my team and I still had some tasks to finish up before the next term started.

We met up via Google Hangouts and assigned tasks to everyone.

My task was to clean up the backend code and optimize the rendering time of our Explore tool. Because our tool rendered the dataset table all at once, the page could take up to a minute to load when a dataset spanned multiple pages. To fix that, I changed the backend to render one table page at a time, cutting the load time for that dataset to 11 seconds.
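The change amounts to paginating the table instead of rendering every row up front. A minimal sketch of the idea (the function name and page size here are illustrative, not our actual backend code):

```python
def paginate(rows, page, page_size=50):
    """Return only the slice of rows needed for one table page.

    Serving a single page keeps the initial response small even
    when the full dataset spans many pages.
    """
    start = (page - 1) * page_size
    return rows[start:start + page_size]

# Example: a 10,000-row dataset served one 50-row page at a time.
rows = list(range(10_000))
first_page = paginate(rows, page=1)
second_page = paginate(rows, page=2)
```

Subsequent pages are then fetched on demand as the user navigates the table, rather than all at once on landing.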

Week 15: Last Changes before Break

To test our product further, we looked into questionnaires on Mechanical Turk and made plans to organize our own online survey.

Mechanical Turk is a crowdsourcing marketplace where workers perform tasks (such as our questionnaire) and get paid per task or question. Because testing our tool takes about 30 to 40 minutes, it demands more time and effort than volunteer interviewees are willing to provide. Therefore, we want to incentivize participants to complete our tasks by posting them on Mechanical Turk, where they get paid per question.
After looking into typical pay rates, we concluded that a payment of about 15 cents per minute is to be expected. To shorten the online questionnaire, I split it into three separate ones, each testing a specific view of Build (list, pairwise, categorical).
At 15 cents per question, with each questionnaire consisting of 10 questions, we decided to limit our budget to 90 people. The below statistics show the approximate…
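The arithmetic behind that budget works out as follows (a rough estimate; Mechanical Turk's own fees are not included here):

```python
pay_per_question = 0.15    # dollars per question
questions_per_survey = 10  # questions in each of the three questionnaires
participants = 90          # total people across all questionnaires

# Total worker payments, before any marketplace fees.
total_cost = pay_per_question * questions_per_survey * participants
print(f"${total_cost:.2f}")  # prints $135.00
```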

Week 14: Rebuilding the UI

Addressing the feedback we received last week, my team and I began to redesign the UI.

Firstly, in order to make the functionality of RANKit clearer, we updated the landing page to include a tutorial.

Furthermore, to address the problem of the ranking boxes not being visible until scrolling down, we redesigned the Build UI so that the dataset sits to the left of the ranking box rather than above it.

As this is the last week of the term, we met up to discuss plans for the winter. Below is the list of tasks we need to accomplish.

- Build view functionality fixes
  - Pairwise view dataset pool keeps a copy of the removed object
  - Dataset pool is not preserved between views
  - Rank button should rank only the objects in the specific tab
- Build view UI fixes
  - Option to pick a view when first landing on the Build view
  - Fix instruction placement in Build view, info button
- Explore
  - Redesign Explore UI such that you can view the attribute weights on rating
  - Optimize Explore rendering time
- Machine learning s…

Week 13: Receiving Feedback

User Feedback

Last week, we aimed to arrange informal interview sessions with faculty from the School of Business and the CS department to get input on our tool.
To accomplish that, we created a list of tasks for each participant to go through. After completing each task, the participant would be asked a set of questions to encourage constructive, targeted feedback.

From this questionnaire, we focused on several major improvements.

The machine learning tool produced a reversed ranking. Therefore, we decided to run a couple of sanity checks on the algorithm to verify that the results are as expected.
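One such sanity check is to feed the algorithm a dataset whose correct order is known in advance and confirm the output is not reversed. A sketch of that idea with a stand-in `rank` function (our actual model is not shown here):

```python
def rank(items, score):
    """Stand-in ranker: order items by score, best (highest) first."""
    return sorted(items, key=score, reverse=True)

# Known ground truth: the higher-scored object should always rank first.
items = [("a", 1.0), ("b", 3.0), ("c", 2.0)]
ranked = rank(items, score=lambda pair: pair[1])

# Sanity check: the output must not come back in reversed (ascending) order.
assert ranked[0][1] >= ranked[-1][1], "ranking is reversed!"
assert [name for name, _ in ranked] == ["b", "c", "a"]
```

Running checks like this against a handful of hand-ordered datasets makes a sign flip in the scoring step show up immediately.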

For the list comparison view, ranking more than two objects requires the user to scroll down to reveal the ranked box. To fix this, the dataset box will be moved to the side of the screen rather than the top, and the ranked box will be placed to the left of the dataset box instead of below it. Furthermore, in order to accept the dropped object, the ranked box …