Week 20: Starting Paper and Fixing Frontend

Last week I was in charge of preparing my group to start the user study and of writing our research paper.

For the user study, following the IRB form questions, I completed a Google Form (so that we could start surveying as soon as we were done with the meeting with the professor). I also started writing an outline for the paper.

Single-spaced: about 9–10 pages each
Abstract
Executive Summary
Introduction
  1. Ranking
    1. Definition
      1. Rankings enable the processing of large, multifaceted data to synthesize a representation of reality, allowing individuals to weigh and observe in clearer detail the choices they face.
    2. Importance
      1. Employed in everyday life and decision making: website search, picking out a place to eat, deciding on the best college
      2. Rankings are based on certain attributes. Each attribute can have a different weight/contribution to the overall ranking. Available online rankings may hide which attributes are considered, obscuring the meaning of the rankings.
      3. It is important to be wary of rankings whose weights and attributes are not disclosed. To address this problem, it is vital to encourage individuals to formulate their own rankings.
    3. Machine learning techniques
      1. Manually ranking (assigning each attribute a specific weight) can be a difficult task for an individual who is unfamiliar with the whole dataset or unsure about the importance of each attribute. Coming up with the weights requires prior knowledge, which is not beginner-friendly.
      2. Machine learning techniques allow for automatic rankings of datasets.
    4. RANKit
      1. RANKit is an online ranking application that provides solutions for analyzing and exploring rankings. The system uses machine learning to automatically construct rankings based on partial input from users. Intuitive interfaces allow for the effective building, exploring, and explaining of rankings.
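To make the weighted-attribute idea in the Introduction concrete, here is a minimal sketch of a manually weighted ranking. The items, attribute names, and weight values are hypothetical examples, not taken from RANKit or a real dataset:

```python
# Minimal sketch of a manually weighted ranking.
# The colleges, attributes, and weights are hypothetical examples.
colleges = [
    {"name": "College A", "affordability": 0.9, "research": 0.4},
    {"name": "College B", "affordability": 0.5, "research": 0.9},
    {"name": "College C", "affordability": 0.7, "research": 0.6},
]

# Each attribute contributes to the overall score according to its weight.
weights = {"affordability": 0.3, "research": 0.7}

def overall_score(item):
    return sum(weights[attr] * item[attr] for attr in weights)

# Sort by the weighted sum, best first.
ranking = sorted(colleges, key=overall_score, reverse=True)
print([c["name"] for c in ranking])  # ['College B', 'College C', 'College A']
```

Changing the weights reorders the list, which is exactly why a ranking with undisclosed weights is hard to interpret.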

Background
  1. Motivation
    1. Problems in ranking
      1. Misleading results
        1. Case study
        2. There needs to be a framework that helps users build an intuition for how rankings work.
      2. Fairness in ranking
        1. Statistical parity: each group should have equal representation
        2. Equalized odds: if you train a model, the number of errors it makes in prediction should be the same for each group
        3. The ranking you assign should be meaningful
    2. Intuitiveness of UI
      1. Clean and simple design
        1. Small amount of text
        2. Unobtrusive buttons
        3. Colors
          1. Color blindness
      2. Common practices in most websites
        1. Tabs
        2. Buttons
        3. Hover states
          1. Cursor changing
  2. Related Work
    1. Matters
      1. Description of tool
      2. Observations - key features
      3. How it inspired RANKit
    2. Lineup
      1. Description of tool
      2. Observations - key features
      3. How it inspired RANKit
    3. Podium
      1. Description of tool
      2. Observations - key features

Methodology
  1. Goals
    1. Encourage critical thinking and spread awareness of how rankings are formulated
    2. Create an unbiased and easy-to-use tool for individuals both knowledgeable and inexperienced in the subject of interest
  2. Overview
    1. Deciding a platform type
      1. Web application
      2. Desktop application
      3. Mobile application
    2. Researching languages
      1. JavaScript
        1. Why would it be useful?
        2. What are the drawbacks?
      2. Python
        1. Why would it be useful?
        2. What are the drawbacks?
      3. Hybrid of Python and JavaScript
        1. Why would it be useful?
        2. What are the drawbacks?
  3. Design and Implementation
    1. Splitting the backend and frontend
      1. The Server
        1. Machine learning algorithm to determine rankings
        2. Importance of performance
      2. The Client
        1. Capture interest with a landing page
        2. Machine learning tool with a friendly UI
        3. Visualization of final ranking
  4. Interview
    1. Goals
      1. Evaluate which method of building a ranking is most favorable among the three presented
      2. Estimate an amount of partial information the user is willing to input
      3. Determine the intuitiveness of the user interface
    2. Procedure
      1. The method for gathering data will involve in-person interviews.
      2. The interviews will consist of a one-time session where the participants will be asked to perform two tasks using our ranking application. After performing each task, they will be asked to rate the overall quality of the interaction using a questionnaire.
    3. Questions
      1. How can we test the goals?
  5. Online Survey
    1. Goals
      1. Evaluate which method of building a ranking is most favorably rated among the three Build views
    2. Procedure
      1. The method for gathering data will involve participation in an online questionnaire.
      2. The survey will consist of a one-time session where the participants will be asked to perform six tasks using the ranking application. After performing each task, they will be asked to rate the overall quality of the interaction using a questionnaire.
    3. Questions
      1. How can we test the goals?
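The Methodology above says the server's machine learning algorithm constructs rankings from partial user input. One common way to do this, sketched here with made-up data (RANKit's actual algorithm may differ), is to fit linear attribute weights to pairwise preferences, in the style of Bradley–Terry / RankSVM:

```python
import numpy as np

# Each row holds one item's attribute values (hypothetical data).
X = np.array([
    [0.9, 0.4],  # item 0
    [0.5, 0.9],  # item 1
    [0.7, 0.6],  # item 2
])
# Partial user input: (preferred, less preferred) index pairs.
pairs = [(1, 0), (1, 2), (2, 0)]

# Fit weights by gradient ascent on the pairwise logistic likelihood.
w = np.zeros(X.shape[1])
lr = 0.5
for _ in range(200):
    for i, j in pairs:
        diff = X[i] - X[j]
        p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(item i beats item j)
        w += lr * (1.0 - p) * diff

scores = X @ w
ranking = [int(r) for r in np.argsort(-scores)]  # best first
```

The learned weights then play the same role as manually chosen ones, so a weight widget like the one in the Explore view can display them back to the user.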

Results
  1. Goals
    1. What we have done to address the goals below
      1. Encourage critical thinking and spread awareness of how rankings are formulated
      2. Create an unbiased and easy-to-use tool for individuals both knowledgeable and inexperienced in the subject of interest
  2. Overview
    1. Backend language: Python
      1. Description
      2. Why we chose it
    2. Frontend: Jinja2 templates and JavaScript
      1. Description
      2. Why we chose it
  3. Design and Implementation
    1. The Server
      1. Architecture of the system
        1. File structure
        2. Blueprint architecture
      2. The ranking script
    2. The Client
      1. Landing page
        1. Different iterations
        2. Button placement
      2. Build Methods
        1. Visual description
        2. When this can be useful
        3. Algorithm that generates pairwise comparisons
      3. Explore
        1. Widget that displays weights of each attribute (that determined the final ranking)
        2. Table features
  4. Interview
    1. Intuitiveness of the Interface
    2. Comparing Ranking Techniques
    3. Feedback
  5. Survey
    1. Data and Analysis
      1. Overview of the questions and what they are trying to test
      2. Data
      3. Analysis
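Since the backend is Python serving Jinja2 templates, the blueprint item under the server architecture presumably refers to Flask blueprints. A minimal sketch of that layout, with hypothetical route names (the real RANKit file structure is what the paper will describe):

```python
from flask import Blueprint, Flask

# Each part of the client gets its own blueprint (hypothetical names).
build = Blueprint("build", __name__, url_prefix="/build")
explore = Blueprint("explore", __name__, url_prefix="/explore")

@build.route("/")
def build_index():
    return "Build a ranking"

@explore.route("/")
def explore_index():
    return "Explore the ranking"

def create_app():
    # The app factory registers each blueprint on one Flask app,
    # keeping the codebase split by feature.
    app = Flask(__name__)
    app.register_blueprint(build)
    app.register_blueprint(explore)
    return app
```

Splitting routes this way mirrors the frontend split between the Build and Explore views.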


Future Work
  1. Studies
  2. Features
    1. Rank by attribute
    2. Rank by multiple attributes
    3. More interactivity in Explore

Conclusions (1 page)

Appendices
Survey Questions

Interview Questions
