Recent Changes

Tuesday, December 8

  1. msg Not surprising reactions message posted
    Not surprising reactions
    Overall the paper demonstrates both appropriate application of the methods and analysis, but one feature I really enjoyed is the level of detail at which they describe their method and analysis. It is detailed enough that replication should be easy, but, as they note, their sampling may not be representative. Though given the general lack of awareness of algorithms, it would not surprise me if a more representative sample showed an even greater lack of awareness. This paper gets at the transparency issue in algorithms: what about them do we need to expose, and at what level, to support informed behaviors and expectations? These algorithms could too easily have unintended negative consequences as well as positive ones.



    For fun you can dive into the FeedVis code: https://github.com/haoranyu/cs467-fbVis

    I'm interested in their follow-up work bringing more quantitative measures to bear on the impact.
    9:29 am

Monday, December 7

  1. msg Summary message posted
    Summary
    The paper investigated people's awareness of algorithmic News Feed selection, described their reactions to it, and examined the effect on their future behavior. The research used a solid qualitative study method, and multiple findings were described in a concise manner. I liked the paper.

    Having said that, I doubt the generalizability of the findings to other situations; the concept of the News Feed is specific to Facebook, and I don't know how people who don't really care about Facebook research would appreciate this work.
    11:35 pm
  2. msg Interesting message posted
    Interesting
    Reading the abstract, I find it surprising that users are unaware of algorithmic curation on Facebook, and I hope the authors go over how this knowledge impacted active content management.

    Ultimately, I believe the methods used in this paper are appropriate, and I am a little surprised at my own bias: I would have assumed that most Facebook users are aware of how these algorithms work (perhaps at a very high level). I find it surprising that people reacted with anger upon learning what Facebook "hides" from users, but given the lack of algorithmic awareness it's believable.

    I find the post-survey response rate used to answer R3 to be pretty good. I wonder whether participants could use FeedVis any time they wanted. Could they use it as a tool to tune their feeds? Could the authors explore this more? I would be interested in seeing this examined in the future work the authors propose.

    It's interesting that this was done for Facebook, but I have some doubts about surfacing algorithmic details outside of social media. In the case of search engine results and recommendation systems, the algorithms are protected by the companies that own them (e.g., Google) because they are important trade secrets. That being said, when the authors asked "what other insights might we draw from our findings to inform the design of technology?", I think this paper could have been improved had the authors provided an answer via a short list of recommendations; however, the discussion covers a lot of this and is quite interesting.
    9:54 am

Tuesday, December 1

  1. page Fall-2015-Discussion-ReasoningAboutInvisibleAlgorithms edited
    Discussion for:
    [ACM DL IS DOWN!?] Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., ... & Sandvig, C. (2015, April). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153-162).
    4:37 pm
  2. 4:36 pm
  3. page Fall-2015-Discussion-ReasoningAboutInvisibleAlgorithms edited
    Discussion for:
    [ACM DL IS DOWN!?]
    4:35 pm
  4. 4:34 pm
  5. msg Summary of Discussion message posted
    Summary of Discussion
    This paper is about bird recognition (or any classification task) from images, specifically focusing on cases where people might not be able to distinguish between the classes of birds (e.g., the difference between a sparrow and a falcon), which the authors say can only be done by experts. The authors build a dataset and compare labeling tasks between experts and non-experts (via Mechanical Turk). Some obvious results about turkers are presented, but the big takeaway is that experts (via citizen science initiatives) are free and give better classification accuracy than turkers, though the turkers have higher throughput and can perform low-level tasks well (e.g., identifying parts of a bird like the wing). This claim is somewhat dubious because there isn't always a crowd of experts available to do the classification task for free. Is there really zero cost to using expert citizen scientists? It seems this project may not be generalizable. Citizen science is great! But is it always available? The Cornell Lab of Ornithology is probably the best, most well-trained citizen science group, and the authors have predicated a lot of their claims on it.

    We did discuss the main contribution of this paper: the NABirds dataset. We discussed deep learning and useful applications of this dataset (e.g., identification of birds, migration tracking, and even use as one possible input to a deep learning algorithm); however, this trajectory largely led to interesting tangents not related to the paper. We also talked about where you could find other popular citizen science communities (e.g., Zooniverse) that have a (potentially) similar scale to the Cornell Lab of Ornithology!

    Concerning the structure of the paper, most of us enjoyed the description of the dataset creation and agreed that the methodology was sound; however, we did wonder whether the paper would be as well received without the comparison study between experts and turkers. We discussed a few improvements to the paper, including a more detailed analysis of inter-rater reliability (IRR) between experts, non-experts, and other cases, which seemed to be missing from this paper (see the sketch at the end of this post). We also discussed the possibility of filtering out turkers to improve results, how that might impact the study (hopefully the results would improve), and how this seems relevant only after understanding some of the IRR analysis we propose.

    Finally, we discussed the potential to combine the abilities of experts and non-experts, how to better compare turkers to non-expert citizen scientists, and the impact of a person's experience (both in life and with the task itself) on results when completing HITs.
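
    As a follow-up to the IRR suggestion above, here is a minimal sketch of what that analysis might look like, using Cohen's kappa from scikit-learn; the label arrays are purely hypothetical stand-ins for per-image species annotations from one expert and one turker, not data from the paper.

        # Minimal IRR sketch (hypothetical data, not from the paper).
        from sklearn.metrics import cohen_kappa_score

        # Species labels assigned to the same six images by an expert
        # citizen scientist and by a turker (made-up examples).
        expert_labels = ["sparrow", "falcon", "sparrow", "warbler", "falcon", "sparrow"]
        turker_labels = ["sparrow", "falcon", "warbler", "warbler", "falcon", "falcon"]

        # Cohen's kappa corrects raw agreement for agreement expected by chance;
        # values near 1 indicate strong agreement, near 0 chance-level agreement.
        kappa = cohen_kappa_score(expert_labels, turker_labels)
        print("Expert vs. turker Cohen's kappa: %.2f" % kappa)

    With more than two raters per image, Fleiss' kappa (e.g., statsmodels' fleiss_kappa) would be the more appropriate statistic, and the same comparison could be run expert-vs-expert and turker-vs-turker to see where agreement breaks down.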
    4:29 pm
  6. msg Summary of Discussion message posted Summary of Discussion (deleted)
    4:29 pm
  7. msg Good paper - some obvious findings but interesting to read message posted
    Good paper - some obvious findings but interesting to read
    This paper makes three different contributions. First, the authors create a new dataset, NABirds, and compare the efficiency and costs of labelling via citizen scientists (experts) and crowdworkers (non-experts). Second, they explore existing datasets (CUB and ImageNet) for errors and examine how these errors affect learning algorithms. Finally, they show that a high-quality dataset like NABirds is important for training CV algorithms.

    Like others, I found the first part of the study fairly obvious (that experts are better than non-experts at labelling and classification tasks), but I'm glad this is documented with a study (not sure if someone has already done this before). I was surprised by the throughput of citizen scientists, and I wonder if this represents an opportunity to sustain or speed up their contributions (I'm not familiar with the citizen science community).

    The results of the second study are more interesting, in particular how the labelling errors in CUB and ImageNet don't distort learning algorithms, due to those datasets' size; again, seemingly obvious but good to document. I have little background in computer vision, but having a well-vetted, high-quality test set seems important for measuring the performance of fine-grained feature detection algorithms.
    9:13 am
