In the final workshop session, each session chair was asked to highlight next steps that had emerged from that session's presentations and discussions and to describe what the different disciplines (biology, computer vision, and visual analytics) should do for one another. Benjamin Richards suggested that the computer vision community provide an integrated set of software tools for extracting the needed biological data from existing image data; he also suggested that it develop a set of hierarchical classifiers. Mubarak Shah said that computer vision researchers should learn about the types of data and problems that already exist in the fisheries community. Chuck Stewart suggested that fisheries researchers identify a basic set of problems at the right scale and engage all user groups in solving them. David Jacobs emphasized the need to work with real-world data, to see what can be done with imperfect tools. Hui Cheng suggested that the fisheries community prioritize a list of problems that need to be addressed.
The question was then posed to the larger audience. One participant proposed making algorithms smart enough to recognize when they do not apply well to the image under consideration; otherwise, the fisheries community may not realize when an algorithm is poorly matched to a given set of circumstances. Another participant emphasized the importance of dual novelty: a problem must be novel in, and useful to, both the fisheries community and the computer vision community for both to want to engage in research on it. Another participant suggested returning the level of uncertainty along with a result, arguing that the uncertainty is as important as the result of a classifier itself; many current fisheries identification and classification tools provide only a result, not a probability. Shah noted that the most common classifier, the support vector machine, can provide a measure of confidence with each result.
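Neither point depends on a particular toolkit, but a short sketch may make them concrete. The example below is a minimal illustration of mine rather than anything presented at the workshop; it assumes scikit-learn and synthetic stand-in features, trains a support vector machine with probability estimates enabled, reports a probability alongside each predicted label, and flags low-confidence predictions so a user can tell when the classifier may not apply well. The threshold and data are placeholders.

```python
# Minimal sketch: a classifier that reports a probability with each
# label and flags low-confidence cases for manual review.
# (scikit-learn, synthetic data; all values are illustrative.)
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in features; in practice these might be image-derived
# measurements used for species identification.
X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# probability=True enables calibrated class probabilities, so the
# model returns more than a bare label.
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X_train, y_train)

labels = clf.predict(X_test)
probs = clf.predict_proba(X_test)  # one row per sample, one column per class

THRESHOLD = 0.6  # illustrative cutoff, not a recommendation
for label, p in zip(labels[:10], probs[:10]):
    confidence = p.max()
    flag = "  <- flag for review" if confidence < THRESHOLD else ""
    print(f"predicted class {label} with confidence {confidence:.2f}{flag}")
```

Returning the probability, or withholding a result below some threshold, is one simple way to address both concerns raised above: the user sees the uncertainty, and the system effectively declines to answer when it is unlikely to be right.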
A participant considered the potential utility of prizes and asked how these have been used in the computer vision community. Computer vision does have a history of using prizes to motivate research on particular topics or data sets. The cost of developing the tools to run a competition, someone noted, is far greater than the prize money itself. Another participant noted the challenge of creating the right data set for a competition: if there is any bias in the data set, it can be exploited to obtain the best results, and those results then fail to generalize to other data sets. Another participant cautioned that prizes do not draw a representative sample of researchers, because established researchers may hesitate to risk their reputations in a competition.
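The generalization concern is straightforward to demonstrate. The sketch below, again an illustration of mine rather than anything shown at the workshop, trains a classifier on one synthetic "survey" and evaluates it both on held-out data from the same source and on the same data after a simulated shift in collection conditions (different cameras, sites, or lighting); all names and numbers are placeholders.

```python
# Sketch of the generalization problem: a model can score well on
# held-out data from its own collection yet degrade sharply on data
# gathered under different conditions. A synthetic feature shift
# stands in for the differences between competition and field data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate a second data set collected under different conditions.
rng = np.random.default_rng(0)
X_other = X_test + rng.normal(scale=1.5, size=X_test.shape)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"same-source accuracy:  {accuracy_score(y_test, clf.predict(X_test)):.2f}")
print(f"other-source accuracy: {accuracy_score(y_test, clf.predict(X_other)):.2f}")
```

A large gap between the two numbers is the signature of a model that has fit the quirks of one data set; evaluating on independently collected data is the standard guard against it.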
Another participant suggested that the community focus on “low-hanging fruit”—that is, projects that would have a high likelihood of success without a large investment in resources. An example of this, the participant suggested, might be Research Experiences for Undergraduates projects.1
Several additional topics were discussed at the workshop on various occasions but not in the final concluding session. Discussion items addressed by multiple speakers or participants during the course of the workshop include the following:

• tools and techniques to obtain accurate measures of absolute abundance, rather than the relative measures that are more commonly available today.

__________________

1 The National Science Foundation funds research opportunities for undergraduates through its Research Experiences for Undergraduates projects. For more information, see “NSF REU Program Overview Home Page,” http://www.nsf.gov/crssprgm/reu/, accessed September 26, 2014.