% Auburn Research Symposium Template
% Author: Michael A. Alcorn
% License: Creative Commons CC BY 4.0
% The official abstract template for the Auburn University Research Symposium.
\documentclass[12pt]{article}
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{gensymb}
\usepackage[left=2cm, right=2cm, top=2cm]{geometry}
\usepackage{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{mathptmx}
\hypersetup{
colorlinks=true,
urlcolor=blue
}
\setlength\parindent{0pt}
\begin{document}
\begin{center}
Auburn Research: Student Symposium \\ Tuesday, April 9\textsuperscript{th}, Auburn University Student Center
\end{center}
Please use this \LaTeX{} template for your abstract. The Auburn University Library provides a wealth of resources on how to edit and format \LaTeX{} documents at: \texttt{\href{https://libguides.auburn.edu/LaTeX}{https://libguides.auburn.edu/LaTeX}}.
\begin{center}
\textbf{INSTRUCTIONS}
\end{center}
\begin{enumerate}
\item Edit this template in a \LaTeX{} editor, like \href{https://www.overleaf.com}{Overleaf}. You should \textbf{only} change fields that have \texttt{\%~CHANGE THIS} above them. Do \textbf{NOT} delete any of the field names, e.g., \texttt{\textbackslash textbf\{Title:\}}. Descriptions \textbf{cannot} be more than \textbf{2,000 characters}, \emph{including spaces} (this will be automatically checked by a computer!). The title, author information, and affiliation are not part of the character count. Do not include figures or references in your abstract.
\item Proofread your abstract---it will appear as submitted!
\item Save a copy of this abstract template to your computer and label the file as \texttt{YOURLASTNAME.tex}.
\item Upload the file with your Student Symposium 2019 registration (instructions on the registration form).
\item Abstracts are due February 8, 2019, by 11:59 PM CST.
\end{enumerate}
\bigskip
% CHANGE THIS
% Use sentence case in the title
\textbf{Title:} Strike (with) a pose: neural networks are easily fooled by strange poses of familiar objects

% CHANGE THIS
% Author name should be ordered: Last name, First name, Middle initial
\textbf{Primary Author (and presenter):} Alcorn, Michael, A.

% CHANGE THIS
% Author names should be ordered: 2nd Author Last name, First name; 3rd Author Last name, First name; and so on
\textbf{Additional Authors:} Li, Qi; Gong, Zhitao; Wang, Chengfei; Mai, Long; Ku, Wei-Shinn; Nguyen, Anh

% CHANGE THIS
% Use Title Case for the Department Name
\textbf{Department:} Department of Computer Science and Software Engineering

% CHANGE THIS
% Use Title Case for the School/College Name
\textbf{College/School:} Samuel Ginn College of Engineering

\bigskip

% CHANGE THIS
\textbf{Description:} Deep neural networks (DNNs) are increasingly common components of computer vision systems. When handling ``familiar'' data, DNNs are capable of superhuman performance; however, inputs that are dissimilar to previously encountered examples (but that are still easily recognized by humans) can cause DNNs to make catastrophic mistakes. Here, we present a framework for discovering DNN failures that harnesses 3D computer graphics. Using our framework and a self-assembled dataset of 3D objects, we investigate the vulnerability of DNNs to ``strange'' poses of well-known objects. For objects that are readily recognized by DNNs in their canonical poses, DNNs incorrectly classify 97\% of their pose space. Further, DNNs are highly sensitive to slight pose perturbations; for example, rotating a correctly classified object by as little as $8\degree$ can often cause a DNN to misclassify it. Lastly, 75\% to 99\% of adversarial poses transfer to DNNs with different architectures and/or trained on different datasets.
\end{document}