Week 28: Getting Back to Work

To meet the deadline, the development team has been on a strict schedule to complete all remaining tasks.

Because we will be submitting to a visualization conference, RanKit needs more visualization features. Therefore, we have been working on integrating active learning into the tool. With active learning, the tool gives immediate, engaging feedback on the ranking, helping the user decide whether to rank more items to get better results or to stop once they are satisfied with the ranking as is.

The features I worked on make the Explore tool more robust. The first highlights each row of the data table with a gradient signifying the model's confidence.

The second adds a bar to each row signifying the object's score.
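Both row visuals come down to mapping a number onto a CSS property. Here is a minimal Python sketch of that idea; the function names, the white-to-blue palette, and the assumption that confidence lies in [0, 1] are mine for illustration, not RanKit's actual code:

```python
def confidence_to_color(confidence: float) -> str:
    """Map a confidence in [0, 1] to a hex color on a white-to-blue gradient."""
    c = max(0.0, min(1.0, confidence))  # clamp out-of-range values
    # Linearly interpolate each RGB channel from white (255, 255, 255)
    # toward a blue accent (33, 150, 243).
    r = round(255 + (33 - 255) * c)
    g = round(255 + (150 - 255) * c)
    b = round(255 + (243 - 255) * c)
    return f"#{r:02x}{g:02x}{b:02x}"

def score_to_bar_width(score: float, max_score: float) -> str:
    """Express an object's score as a CSS percentage width for its row bar."""
    pct = 0.0 if max_score == 0 else 100.0 * score / max_score
    return f"{pct:.0f}%"
```

The returned strings would then be set as the row's `background-color` and the bar's `width` on the frontend.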

Week 27: Spring Break

Last week, we had our spring break, so there's no update.

Week 26: Finishing the Paper

Taking into account the feedback received from our mentors, we updated the section analyzing the outcome of the online user study. We also updated the machine learning section to include more references and added more charts throughout the paper.
The team also discussed next steps and identified new features to be implemented.

Week 25: Finalizing Everything

I noted that RanKit's GitHub repository was out of date. The descriptions were old and the installation instructions made little sense. Therefore, I worked on bringing the README and the wiki up to date with our current progress.

The biggest suggestion we got from the ongoing user study was that the dataset attribute names weren't user friendly. Going over each dataset, I updated the attribute names to use only capital letters and no underscores.
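A renaming pass like this can be done in one small function. This is a hypothetical sketch, assuming raw column names are underscore-separated like `avg_points_per_game` (the actual dataset columns aren't listed here):

```python
def clean_attribute(name: str) -> str:
    """Make a raw column name user friendly: drop underscores, use capitals."""
    # Replace underscores with spaces, then uppercase everything.
    return name.replace("_", " ").upper().strip()
```

Applied to a whole dataset, this would turn headers such as `avg_points_per_game` into `AVG POINTS PER GAME`.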

I spent the rest of the time continuing to read over the paper and providing feedback on the parts that were written last week.

Week 24: Refactoring

With the application complete and the online survey well under way, it was time to refactor the backend to make it accessible and easy to understand for those who will take up the project after my team.

The backend code was somewhat disorganized, so I moved all of the helper functions into separate files, split the dataset routes and static files out of the Build folder, and added comments throughout the project. I also did some refactoring on the frontend, because some old routes were still being used when transitioning between the Build components.

Furthermore, I worked on revising half of the paper. The rest of my team had not yet finished writing their parts, so I may need to review more of it next week.

Week 23: Revising the Paper

Having finished off my sections from last week, I was done writing. I spent most of the week grammar-checking the overall paper. Wanting to add more detail on the machine learning part of the project, I ended up reading several papers on different approaches to ranking:

"Learning to Rank." RLScore 0.7 documentation.

Li, Hang. "A Short Introduction to Learning to Rank." IEICE Transactions on Information and Systems 94 (2011): 1854–1862.

Liu, Tie-Yan. "Learning to Rank for Information Retrieval." Foundations and Trends® in Information Retrieval 3, no. 3 (2009): 225–331.

Week 22: Writing the Paper

This week, we made a big push on writing the paper. I assigned sections to everyone on the team as follows.
By the end, I completed the Introduction and the Methodology sections.
Diana
  Introduction (~3 pages)
  Methodology — Online Survey (~3 pages)
  Methodology — Design and Implementation — The Server (~4 pages)
  Abstract
  Executive Summary

Goutham
  Background — Intuitiveness of UI (~3 pages)
  Methodology — Design and Implementation — The Client (~7 pages)

Zarni
  Background — Related Work (~3 pages)
  Evaluation — Online Survey (~4 pages)
  Conclusion (~3 pages)

Malika
  Methodology — Goal & Overview (~5 pages)
  Methodology — Interview (~2 pages)
  Evaluation — Interview (~3 pages)